I have been thinking a lot about a notion of self-paradoxical knowledge, meaning knowledge that actively makes your reasoning worse.
For example, knowledge of extremely rare diseases causes the mind to overestimate their importance by many orders of magnitude (there are many variants of this effect).
Or how explanations of certain concepts around the object/subject construction tend to use language grounded in the concept of a shared objective reality, which carries you further from true understanding -- in other words, "the tao that can be named is not the tao".
I didn't think "There Is No Antimemetics Division" did very well with its premise, but the premise is quite fascinating, and it's the closest I've seen to this concept. Are there other explorations of similar ideas?
> I have been thinking a lot about a notion of self-paradoxical knowledge, meaning knowledge that actively makes your reasoning worse.
There could be a hypothetical class of ideas where just knowing about them is actively harmful. For a fictional example, imagine learning how to detect a hostile alien race that has been living with us on Earth all this time. Or imagine we one day invent a thought experiment that induces psychosis in anyone who tries to unravel it.
I had seen that, but all those examples involve an actual, external threat resulting from the knowledge.
I was thinking more of the Buddhist parable that men don't know when they'll die, because only buddhas are able to live with that information.
I guess it's very close to 'malinformation', but that is still about an external actor manipulating what you know toward an external goal, rather than something intrinsic to the information.
The opposite of this is fascinating to me too. There are false beliefs that make the people who hold them better by some metrics. Like the idea that hard work leads to success: we all know there is some element of luck, but even so, people who discount luck and believe only in hard work tend to do better.
That's a good point. I think this one can easily be resolved on a factual level, since hard work is one of the few variables you can actually control. But it is more interesting from an emotional point of view, since in many cases it would be an article of faith held against the implicit fear that it might not be true.
There are variations of this, such as composition theory in art getting good results based on completely false assumptions, but these tend to fall under epistemic underdetermination.
There is a similar path with Chinese painting. The language of painting was refined over millennia, but the last two centuries saw an extremely rapid integration of Western influences.
This article is interesting because language is like water to a fish, the invisible medium humans live through. Since art is more 'foreign' and 'superfluous', the changes were more obvious and there was much more debate regarding this evolution than in linguistics.
I spoke with a painter in the artistic lineage of Shi Guoliang, and he recalled how much that work could be seen as "Western art painted with a Chinese brush".
I think the criticism was directed more towards such painters than towards, say, the Lingnan school, which explicitly sought to revitalize Chinese painting through foreign influences, because it goes right to the foundations of the painting -- how perspective and light are tackled through the 'scientific' system rather than through the elaborate symbolic system of classical painting.
> We assert that artificial intelligence is a natural evolution of human tools developed throughout history to facilitate the creation, organization, and dissemination of ideas, and argue that it is paramount that the development and application of AI remain fundamentally human-centered.
While this is a noble goal, it seems obvious that this isn't how it usually goes.
For instance, "free market" is often invoked as dogma to defend companies that are actively harmful to society, as "globalization" might be.
An unstoppable force, so any form of opposition is "luddite behavior".
Another one is easier transport and remote communication, which generally broke down the social fabric.
Or social media wreaking havoc on teens' minds.
From there, it's easy to see why the technological system might be seen as an inherent evil.
In Erewhon (1872), Butler already described the technological system as a force that human society could no longer contain once it tolerated it.
There are already many companies persecuting their employees for not using AI enough, even when the employee's objection is that the quality of the output is not good enough for the work at hand, rather than anything ideological.
I'm neither optimistic nor pessimistic about the changes AI might bring, but hoping for it to become "human-centered" seems almost as optimistic as hoping for "humane wars".
> "free market" is often used as a dogma against companies that are actively harmful to society
This is a predominantly America-specific piece of propaganda, and it's pretty recent.
Adam Smith's ideas are primarily arguments against mercantilism (e.g. things like using tariffs to wield self-interested state power), something he showed to be against the common good. The "invisible hand" concept is used to show how self-interested action can, under conditions of *competitive markets*, lead to unintentional alignment with the common good.
Obviously that's a significant departure from the way it's commonly used today, where Thiel's book has influenced so many entrepreneurs into believing Monopolies are Good.
But the history of this is very Cold War-influenced, where "free markets" were politically positioned as alternatives to the USSR's "planned economy", and slowly pushed to depart further and further from Adam Smith's original argument about moral philosophy.
The government is a rare example of an extremely strong monopoly, not just a duopoly or a company holding significant market share. And yet people never seem to criticise it on those grounds, despite it suffering from all of the same problems that corporate monopolies are accused of.
The amount of political control individuals have in a democracy is much less than people think. I get to vote every few years for a small preselected handful of representatives, none of whom represent me or my views, and that vote is usually meaningless anyways. In my political life none of the parties I have voted for have won an election. So tell me how much control of the government do I have? How much do any of us have? Especially when the government itself often goes against the wishes of their voters anyways. The reason democracy is preferable to a dictatorship is because it prevents the government from doing things which are widely unpopular, like committing acts of genocide and political violence. But that's it really.
Remember, governments have a monopoly in areas of life that impact everybody constantly. Private monopolies do not; in every case you can simply ignore them and not purchase their products, and by doing so you exert more control over them than you do by casting a meaningless vote every few years.
The idea of democracy as the will of the people is pure fantasy. It's useful for preventing governments from doing really bad things; that's it. It doesn't even matter who you vote for, broadly speaking, just that you are able to exert that pressure.
It has been funny to watch the rise of "China is beating us" rhetoric against the steady backdrop of "mercantilism is obsolete/bad" dogma, because the elephant in the room is that China has been running a textbook mercantilist playbook.
Looks to my uneducated eye like China has been enforcing competitive markets internally, with consistent economic policies. Meanwhile the US has stopped enforcing antitrust altogether and keeps violently changing its mind about economic policy every four years.
And then externally Chinese policy is oriented around suppressing the value of its currency, which is basically a monopolist's tactic--artificially lowered prices in order to crowd out competition.
I think that's mercantilist-ish, but kind of a modern version?
It's definitely the opposite of what the US does: the dollar is the world reserve currency, which drives its price above what it would otherwise be, and which I guess makes exporting from the US much more difficult?
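To put invented numbers on the mechanism (all made up, and happy to be corrected on the economics):

    # Invented exchange rates, purely to illustrate the mechanism.
    widget_cny = 60.0
    print(widget_cny / 6.0)   # 10.0 USD if markets set ~6.0 CNY per USD
    print(widget_cny / 7.5)   # 8.0 USD at a suppressed ~7.5 CNY per USD
    # Holding the rate at 7.5 works like a 20% across-the-board export
    # discount; reserve demand for the dollar applies the same mechanism
    # to the US in reverse.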
Anyone who is an expert in global economics please correct me here :)
I mean, they did execute a wealthy banker a couple of years ago. So I think the mercantile class occupies a different place in society there than in America.
> Thiel's book has influenced so many entrepreneurs into believing Monopolies are Good.
Haven't read his book, but the idea that monopolies are good isn't typically made in a vacuum, it's made relative to alternatives, most often "ham-fisted government intervention". It's easier to take down a badly behaving monopoly than to change government, so believing monopolies are better than the alternatives seems like a decent heuristic.
>Haven't read his book, but the idea that monopolies are good isn't typically made in a vacuum, it's made relative to alternatives, most often "ham-fisted government intervention". It's easier to take down a badly behaving monopoly than to change government, so believing monopolies are better than the alternatives seems like a decent heuristic.
What? How is the first alternative poor government instead of multiple competing companies? When was the last time a monopoly was actually broken up in the US? AT&T/Bell, 40 years ago? lol
A monopoly implies an organization powerful enough to stop competition, so a solution that relies on competitors seems fatally flawed. If there are enough competitors to meaningfully compete, then there isn't a monopoly.
This seems like a classic straw man argument. Plutocratic oligarchs have been making the argument that private monopolies are better than representative democracy at basically any societal function for decades without any actual data.
Economic behavior is inherently game theoretic - agents take various actions and get some positive/negative reward as a result. Whether an agent's reward is positive or negative and of what magnitude, depends on the strategies employed by all agents. If some agents adopt new strategies, the reward calculus for everyone involved can completely change [1].
Over the past few centuries, countless new economic structures and strategies have been discovered and practiced. The rewards for the same action today and in the past can be completely different due to this.
So to me, if someone claimed more than a few decades ago that certain economic strategies and structures are good or bad, it's simply not worth listening to them, unless someone reconfirms that the old finding still holds against the latest range of strategies. In that case, the credit and citation go to that new someone, not the ghosts of the past.
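To make that concrete, here is a toy two-strategy game in Python (the payoff numbers are completely made up): the value of the same action flips depending on what strategy the rest of the population adopts.

    # payoffs[my_action][their_action] = my reward (made-up numbers)
    payoffs = {
        "cooperate": {"cooperate": 3, "defect": -2},
        "defect":    {"cooperate": 5, "defect": -1},
    }

    for their_strategy in ("cooperate", "defect"):
        for my_action in ("cooperate", "defect"):
            reward = payoffs[my_action][their_strategy]
            print(f"I {my_action}, they {their_strategy}: reward {reward}")

    # Against cooperators, defecting pays best (5 vs 3); once everyone
    # defects, every action loses. Whether a strategy is "good" is only
    # defined relative to the strategies it currently plays against.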
Game theory is just math. As with any math, the calculations can all work out, but that says nothing of how well it reflects nature. All you can say is that if the axioms are all true, then this is the necessary outcome. Look to string theory as a cautionary tale here.
Game theory assumes rational agents. But we have over six decades of behavioral science which contradicts that fundamental assumption. Economic behavior is not necessarily rational, and consequently it is not inherently game theoretic. You will find plenty of dogmatic, idealistic, superstitious, counterproductive, etc. behaviors in an economy. You need psychology, and not just math, to describe the economics that happens in the real world.
Game theory definitely does not require rational agents. Game theory says there are agents with certain specified strategies. Whether a strategy is rational under the underlying theory of value the agent adopts is a completely separate matter. For example, it's very common to study agents who always take one action no matter what others do or what the reward function is. Hard to call such actors rational, but that does not stop us from studying them.
When I said rational, I meant it in this way: that rational agents will perform in a way which maximizes their utility (or reward function). In psychology we call this theory homo economicus[1]. Regardless of theories of value, perceived rewards, and utilities, human beings have biases, prejudice, superstition, dogma, etc. In real economies these non-strategic behaviors are consistently observed. This is why I say that economics is not just game theory; it is psychology, sociology, and religion as much as it is zero-sum games between actors.
> Hard to call such actors rational, but that does not stop us from studying them.
This is precisely why I object to your first post. “Shut up and do the math” has not done the wonders which string theorists had hoped for. Game theory is a perfectly fine way to do mathematics, and to study certain strategies, but it tells us nothing about the nature of economies in the real world.
> if someone claimed more than a few decades ago that certain economic strategies and structures are good or bad
As you point out, it is all game theory.
But things that arrange the game to be more beneficial to everyone, that align our interests more, deserve to be called "good", regardless of their inability to do so universally.
The latter would be an impossible bar for anything.
Where I find things frustrating, is when someone thinks because something is "good", it somehow becomes "enough". (Think, capitalized versions of different economic schools of thought.)
Not even sure AI was ever "human-centered". DARPA funded a large amount of AI research from as early as the 1960s. The DART tool used in the Gulf War made back all of their previous investments. [1]
To clarify: the AI above was DARPA-funded research programs that produced the algorithms behind the DART logistics tool, which let the Department of War (aka the Department of Defense at the time) run the first Gulf War efficiently enough that DARPA did more than break even on the grants it gave out.
I just re-read my comment (too late for edits) and there are a number of typos (including missing "not") that significantly degraded the syntax, but the point kind of came across anyway.
No it wasn't. Look at Joseph Stiglitz (Globalization and Its Discontents) and Ha-Joon Chang (Bad Samaritans, Kicking Away the Ladder) for counter-examples.
Why not, because they're too old to learn, or because the support infrastructure is not there? I believe most people are capable of continued learning, but they might need help (financial etc.) to make the transition.
This isn't correct. The deal is that the poor countries get development and increased employment, and the rich countries get lower prices. Generally speaking both types of countries get richer.
That some workers lost their jobs is a symptom of any change. I don't know why people always get upset about people losing their jobs. It's like death: if no one died, relatively few people would be born. If you resist job losses, you reduce overall employment and economic development.
Are you serious? People get upset about losing jobs because they need jobs to pay their bills. Further, we often build our life identities around work; if you're a good car mechanic or a successful restaurant owner, you're proud of that. It's a part of you.
Having to repeatedly restart your career is risky, painful, and demoralizing. I have no problem seeing why people don't like that and why it can lead to populist backlash or even violent revolutions (as it did in the past).
By the way, to address your closing comment: people don't like dying either and tend to get upset when others die?
I don't think the point is that the transition isn't difficult. It is that there is an overall benefit that outweighs the challenges of the transition.
The sad part is that industrializing societies have not been very good at reconciling the benefits with the costs. The benefits go first to a select few and have seeped out to the masses slowly. Railroads in the US are a good example. The wealth accumulated by the Vanderbilts, Hills and Harrimans did not get redistributed in any kind of equitable manner. However, everyday people did eventually gain a lot of benefit from those railroads through economic expansion. (None of which addresses the losses of the Native Americans, which should also be part of the equation.)
My impression is that the transition is such an open-ended process that you can’t really call it that. It’s unclear if and when the challenges will be overcome.
> I don't think the point is that the transition isn't difficult. It is that there is an overall benefit that outweighs the challenges of the transition.
In the abstract, sure. But not when you're on the receiving end. It's like with NIMBYism: we all roll our eyes at NIMBYs until it's actually our own backyard. You're not going to convince a coal miner that they're better off learning to code. You're not going to convince a software engineer that they're better off in the mines.
You're missing my point. Job losses are a fact of life, just like death. Why should people get upset about the fact that someone might lose their job, or die? It's not immoral. These things happen constantly to millions of people; if we grieved over every one, we'd be worn out. We happily send young, healthy people to their deaths fighting in wars so we don't have to. I don't see people weeping because the armed forces exist.
Or is this just some sort of PC bullshit, that we can't talk about this sort of progress without carefully lamenting job losses? If you're not useful doing a job, why should you be employed in it? That's the bottom line.
Society is better if we sacrifice one horse and buggy driver job for two engineering jobs. The drivers suffer from that, but the net win for society is so plainly obvious that it's a better investment to retrain the drivers or just pay them off rather than support a job that is dying anyways.
> Society is better if we sacrifice one horse and buggy driver job for two engineering jobs.
That's a "statistic" you're pulling out of your butt, and it's doing a lot of work. No one ever knows if something like that will actually happen.
It could actually turn out that AI sacrifices 100 engineering jobs for 10 low-level service or prostitution jobs and a crap-ton of wealth to those already rich.
> The drivers suffer from that, but the net win for society is so plainly obvious that it's a better investment to retrain the drivers or just pay them off rather than support a job that is dying anyways.
But what actually happens is our free-market society doesn't give a shit. No meaningful retraining happens, no meaningful effort goes into cushioning the blow for the "horse and buggy driver." Our society (or more accurately, the elites in charge) go tell those harmed to fuck off and deal with it.
> It could actually turn out that AI sacrifices 100 engineering jobs for 10 low-level service or prostitution jobs and a crap-ton of wealth to those already rich.
That's where wealth redistribution (Taxation) comes in. The USA is not good at progressive taxation, but everyone could be better off if it were implemented properly.
First, you are thinking of people whose main source of wealth is assets, not income.
Also, I don't think inequality is a problem; better that than everyone being equal, because as history shows, even then there are some who are more equal than others, and others who are just poor.
It would be OK if people lost jobs and the products and services became correspondingly cheap.
But it is a problem if people lose jobs while all the products and services keep costing the same as before, and get costlier over time as before...
The ELIZA program, released in 1966, one of the first chatbots, led to the "ELIZA effect", where normal people would project human qualities upon simple programs. It prompted Joseph Weizenbaum, its author, to write "Computer Power and Human Reason" to try to dispel such errors. I bought a copy for my personal library as a kind of reassuring sanity check.
Flying back from Beijing, I had bought a lot of books. I filled my bags with them, so they were very heavy. When the agent came to check my backpack, he casually grabbed it, and fell onto the conveyor belt trying to lift it. He looked at me in shock. "I'm done", I thought. He opened the bag and saw, on top of the books, a box of zongzi the university had given me. He instantly became radiant, gave me a pat on the back, and just indicated the way.
I know it's a joke, and they probably catch only a tiny minority of cases... but the Chinese government makes a huge theater of executing people who do stuff like this.
Also not sure about the usage of theater there. I'd probably swap it out for "show". Never heard theater used like that although it is pretty close to a standard idiom, "to make a show of something".
Let's take the definition of "video-game art" as the art of defining interactive experiences that open themselves up upon mastery.
This is the original definition of what video-games were at the start (Pac-Man, Space Invaders).
The mastery takes effort to learn; the game, to incentivize this effort, rewards the player when they do well, and punishes them when they don't.
The almost immediate nature of the feedback loop makes learning faster than in almost any other human activity.
Given the nature of the medium, you can tackle a theme (space invaders), and even a story on top of it.
This is good for critics; they know stories, they know that books are the highest form of art for intellectuals.
The currency of critics in the system (the media/advertisement/entertainment industry loop) is credentialism -- except for purely independent critics, who have their own platform and exist through a complex bidirectional relationship with their audience.
However, the story is almost always at odds with gameplay.
A story limits the freedom with which the gameplay system can respond to the player, by railroading certain outcomes.
Often, adapting a story implies different scenes that cannot all fit into one game genre, so the result is more appropriate to a collection of mini-games than to what people generally consider to be a game.
Video-game stories therefore tend towards tropes that don't cause such problems, such as the 'big tournament' arc.
Of course, certain genres have much more freedom (RPGs), but even then a definite story means certain characters can't die, or have to, etc., which removes the meaning of player choices.
The mastery approach hasn't gone away. But critics hate it; the general philosophy of the industry is inclusivity, which is directly at odds with a competitive ranking of players according to skill. Mastery requires effort and rewards innate ability -- reflexes, memory, the ability to do mental computations... all generally translate directly into in-game advantages. So the critic industry has been relentless in disparaging games that directly emphasized mastery (arcade designs, the infamous 'God Hand' review) and in elevating what are generally called 'movie-games', which have worked at eliminating these aspects ('The Last of Us', later 'God of War') to let all players experience the story fully without interacting with the gameplay in any meaningful manner.
They had to compromise because of the success of Dark Souls, which brought mastery back to the forefront, but this is where the total incompetence of mainstream critics is truly glaring (see the infamous 'Cuphead' journalist moment). As a result, their critiques are rarely anything more than press releases with a final score based on production value, not on any insight into the depth of game mechanics and systems.
I'm surprised not to see Chris Crawford mentioned, as The Art of Computer Game Design (1984) makes the central point of this article at the very beginning, and is a primary source of video-game critique.
It seems the shader-based transparent version is badly aliased? The effect is less noticeable in Chrome than in Firefox, but it is still quite visible. This defeats the purpose of vector graphics...
It's a nice trick to play around with, but that limits its usefulness.
It is indeed badly aliased. The technique demonstrated does not take into account antialiasing in the initial render, which causes this issue. There are ways to improve it, but I would advise against this approach in general since it doesn't handle these edge cases well.
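For anyone curious, the usual fix is to fade coverage over roughly one pixel around each edge instead of thresholding hard. A minimal Python sketch of the math a fragment shader would run (the signed-distance setup is my own illustration, not how this demo works; in GLSL the pixel width would come from fwidth):

    import numpy as np

    def smoothstep(e0, e1, x):
        # GLSL-style smoothstep: cubic ramp from 0 at e0 to 1 at e1.
        t = np.clip((x - e0) / (e1 - e0), 0.0, 1.0)
        return t * t * (3.0 - 2.0 * t)

    def coverage(signed_dist, pixel_width):
        # Hard threshold (aliased) would be: alpha = (signed_dist < 0)
        # Antialiased: fade alpha across ~1 pixel around the edge.
        return 1.0 - smoothstep(-0.5 * pixel_width, 0.5 * pixel_width, signed_dist)

    # Example: a circle of radius 0.3 sampled on a coarse grid.
    xs = np.linspace(-0.5, 0.5, 8)
    X, Y = np.meshgrid(xs, xs)
    d = np.sqrt(X**2 + Y**2) - 0.3          # signed distance to the edge
    alpha = coverage(d, pixel_width=xs[1] - xs[0])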
I have a lot of sympathy for the ideals of free software, but I don't think prominently displaying a "What's bad about:" list, including ChatGPT in it, and not making a modicum of effort to sketch out a basic argument does anyone any service. It's barely worth a tweet, which would excuse it as a random blurb of barely coherent thought spurred by the moment. There are a number of serious problems with LLMs; the very poor analogies with neurobiology and the anthropomorphization do poison the public discourse to a point where most arguments don't even mean anything. The article itself presents LLMs as bullshitters, which is clearly another anthropomorphization, so I don't see how this really addresses these problems.
What's bad about: RMS. Not making a decent argument makes your position look unserious.
The objection generally made to RMS is that he is 'radically' pro-freedom rather than being willing to compromise to get 'better results'. This is something that makes sense, and that he is a beacon for. It seems arguments like this weaken even that perspective.
"In theory it's a great idea, in practice not so much."
I feel that's the lesson anyone who has toyed with libertarian ideals ultimately comes to. It just takes a bit longer for some than for others.
It's also harder to realize if you're making mad bank on it, rather than being among the idiots who blew their hard-earned money on some technical misunderstanding, scam, or retroactive regulation.
The comparisons are very useful but also quite limited in terms of styles. Models vary enormously in their ability to follow a given style versus steering back to their own.
It's pretty obvious that OpenAI is terrible at it -- it is known for its unmissable touch. For Flux, however, it really depends on the style. They posted at some point that they changed their training to avoid averaging different styles together, which produces the ultimate AI look. But this is at odds with the goal of directly generating images that are visually appealing, so style matching is going to be a problem for a while, at least.
Style is mostly irrelevant for editing, since the goal is to integrate seamlessly with the existing image. The focus is on performing relatively surgical edits or modifications to existing imagery while minimizing changes to the rest of the image. It is also primarily concerned with realism, though there are some illustrative examples (the JAWS poster, Great Wave off Kanagawa).
This contrasts with the generative section, though even there the emphasis is on prompt adherence, and style/fidelity take a back seat (which is honestly what 99% of existing generative benchmarks already focus on).
Oh, thank you for your reply. We may have different definitions of style and of what editing means.
If you look for example at "Mermaid Disciplinary Committee", every single image is in a very different style, each of which you can consider the default the model assumes for that specific prompt. It's quite obvious that these styles were 'baked into' the models, and it's not clear how much you can steer them towards a specific style.
If you look at "The Yarrctic Circle", a lot more models default to a kind of "generic concept art" style (the "by greg rutkowski" meme), but even then I would classify the results as at least 5 distinct styles. So for me this benchmark is not checking style at all, unless you consider style to be just four broad categories (cartoon, anime, realistic, painterly).
Regarding image editing, I did my own tests at the first release of the Flux tools, and found it was almost impossible to get any decent results in some specific styles, specifically cartoon and concept art styles. I think the tools focus on what imaginary marketing people would want (like "put this can of sugary beverage into an idyllic scene") rather than such use cases. So edits like "color this", or other changes, were just terrible, and certainly unusable.
I didn't go very far with my own benchmarks because my results were just so bad.
But for example, here's a line art with the instruction to color it (I can't remember the prompt, I didn't take notes).
Still, you can see that ChatGPT just throws everything out and does not make even a minimal attempt at respecting the style.
Flux is quite bad, but it follows the design much more closely (although it gets completely confused by it), so it seems that with a whole lot of work you could get something out of it.
Yeah so NOVEL style transfer without the use of a trained LoRA is, to my knowledge, still a relatively unsolved problem. Even in SOTA models like Nano Banana Pro, if you attach several images with a distinct artistic style that is outside of its training data and use a prompt such as:
"Using the attached images as stylistic references, create an image of X"
Now you should be able to see that the generated image is stylistically not even close to the references (which are early works by Yoichi Kotabe). Pay careful attention to the characters.
With locally hostable models, you can try things like Reference/Shuffle ControlNets, but that's not always successful either.
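If you want to experiment, here is a minimal sketch of the Shuffle route with Hugging Face diffusers, following their ControlNet v1.1 examples (the image path and prompt are placeholders; swap in whatever SD 1.5 checkpoint you have locally):

    import torch
    from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
    from diffusers.utils import load_image
    from controlnet_aux import ContentShuffleDetector

    # Shuffle reuses the reference's global color/texture statistics while
    # the prompt drives the content; it is not full style transfer.
    controlnet = ControlNetModel.from_pretrained(
        "lllyasviel/control_v11e_sd15_shuffle", torch_dtype=torch.float16
    )
    pipe = StableDiffusionControlNetPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", controlnet=controlnet,
        torch_dtype=torch.float16,
    ).to("cuda")

    ref = load_image("style_reference.png")   # placeholder path
    control = ContentShuffleDetector()(ref)   # shuffled conditioning image

    image = pipe(
        "an image of X in the style of the reference",
        image=control,
        num_inference_steps=30,
    ).images[0]
    image.save("out.png")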
I'm pretty sure that some model at least advertised that it would work. I also think your example was in the training data at some point at least, but I suspect these styles are kind of pruned when the models are steered towards the "aesthetically pleasing" outputs which are often used as benchmarks. Thanks for the replies, it's quite informative.