
Making Art in the Age of Artificial Intelligence


by BHANG Youngmoon (Photographer, Incheon, S. Korea)


Every story is steeped in results and outcomes.

Let's take another path.

  • Biological intelligence and machine intelligence have different fundamentals. Living things developed intelligence by recognizing their surroundings and moving through them in order to preserve themselves. Environment, experience, and the interpretation of both determine the character of an organism and create diversity. Thought is movement, and movement is thought.

  • In a formal system, there are statements that can be expressed but cannot be proven. 'Computer' originally meant a person who calculates; today it means a machine that manipulates formal systems. Artificial intelligence is a program running on a device that implements a formal system. In other words, it came from mathematics. The world can be described through mathematics, but it is not mathematics itself. We often confuse the two: the method of description with the object described, the model with the thing itself. If the world were a formal system, everything would be nothing more than meaningless syntax, and not even its consistency could be established.

  • In a feedback structure, a 'thing' without self-negation cannot be self-aware. Living things do not, as a rule, fall into the paradox of self-reference. The human experience of time is likewise a phenomenon that arises because self-identity cannot stand firm. The individualized neural connectivity of each living organism changes from moment to moment. Synapses adjust the strength of their signals based on past experience; they are not built on a consistent binary of on and off, of 0 and 1. This ambiguity makes a living self distinct, yet at the same time blurry and loose. There may be clarity in what does not change, but there is uncertainty in what changes. Perhaps this state of balance is what defines living things.

If we were to summarize the Industrial Revolution in one phrase, it would be “the replacement of manpower by capital.” Something similar is happening today: the common prediction that jobs will be lost to artificial intelligence (AI) rests, at bottom, on the same logic.

If we focus on results and performance, AI is an indescribable threat. Business wants performance, and politics wants results it can use for itself. This is how the so-called ‘concentrated demands’ of a society manifest themselves. Societies seeking development need results, publishable and applicable results, and achievements. Naturally, most discussions rest on such thinking. AI development requires enormous funding, so results and performance must be guaranteed to attract investment and support.

Therefore, all stories are focused on results and achievements.

That's why I'd like to tell a different story.

Humans and AI are slightly different from each other

-- based on #Rebecca_Goldstein

Mathematics is done through a priori reasoning, and mathematically established conclusions cannot be overturned. As a result, we have accumulated mathematical knowledge for some 2,600 years, undisturbed by any paradigm shift. Mathematics is unique in that it alone possesses an 'axiom system' in the strict sense.

Axioms are the self-evident fundamental truths that make up an axiomatic system.

For the whole to inspire confidence, questions about the truth of the basic axioms must be resolved. The motivation underlying the axiomatic system is to secure certainty by minimizing, even eliminating, appeals to intuition. This goal leads to the idea of a formal system: an axiomatic system from which appeals to intuition have been eliminated. This is what the mathematician David Hilbert pursued, and it was defeated by Kurt Gödel at the Königsberg conference in 1930.

Gödel proved that Hilbert's problem could not be solved: the consistency of the axioms of arithmetic can never be established by a finite formal proof. He therefore concluded, very calmly, that no proof could support Hilbert's program and that formalism could, in the end, accomplish nothing of the kind.
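For reference, the two incompleteness theorems behind this conclusion can be stated informally in modern notation (the symbols $F$, $G_F$, and $\mathrm{Con}(F)$ below are the standard ones, not the essay's):

```latex
% First incompleteness theorem: any consistent, effectively axiomatized
% system F strong enough for elementary arithmetic leaves some sentence
% G_F undecided (neither provable nor refutable).
F \text{ consistent} \;\Longrightarrow\; F \nvdash G_F \quad\text{and}\quad F \nvdash \neg G_F

% Second incompleteness theorem: such an F cannot prove its own
% consistency statement Con(F).
F \text{ consistent} \;\Longrightarrow\; F \nvdash \mathrm{Con}(F)
```

The second theorem is the one that directly blocks Hilbert's plan of a finitary consistency proof for arithmetic.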

The computational process of a computer is a formal system itself; its conclusions are computational and carry no meaning in themselves. Jerry Fodor and Ernest Lepore argue that artificial neural networks cannot represent meaning. Even if one fully accepts the Computational Theory of Mind, there remain fundamental differences between our intelligence and that of machines. Moreover, the formal system rests on the idea of securing mathematical certainty by eliminating intuition, and Gödel's incompleteness proof reveals a world that remains constantly, endlessly open.

Accepting Gödel's position means accepting that even a system governed by a rigorously logical structure is compatible with the emergence of something new. This is a very important perspective for understanding the evolution of living creatures, and it may prove just as important for explaining today's increasingly complex machine learning.

However, if you lift the curtain, the intelligence realized in machines by multiplying the transistors used for calculation on a 'formal system', and by multiplying the parameters that handle language, is different from what we have so far called 'intelligence' or 'understanding'.

We can see that it is very different.

Where does ‘meaning’ come from?

Eleanor Rosch's prototype theory belongs to cognitive science, covering specific areas of psychology and cognitive linguistics. According to this theory, instances are judged representative of a given concept to the extent that they share properties with its prototype.

Paul Churchland argues that, much as in machine learning, meaning is represented in the human brain through vector spaces. What he presents through the prototype-activation (PA) model is that a specific point in a vector space represents a specific concept, that is, a specific linguistic expression.

Jerry Fodor and Ernest Lepore object that, for two people to grasp similar or identical meanings, their neurons would have to stand in the same causal dependencies. The claim that the same concept occupies the same position in each person's vector space is therefore, they argue, probabilistically impossible.


Churchland replies that shared meaning is established through structural isomorphism between vector spaces, not through identical locations within them. His argument finds an echo in the principles of the large language models (LLMs) we now encounter so easily.
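Churchland's point can be sketched with a toy example (my own illustration, not from the essay): rotate a set of 'concept' vectors into a completely different coordinate frame. Every absolute position changes, yet the pattern of pairwise similarities, the relational structure, is preserved exactly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "concept" vectors in speaker A's space (rows: dog, cat, car).
A = np.array([[1.0, 0.2, 0.0],
              [0.9, 0.3, 0.1],
              [0.0, 0.1, 1.0]])

# Speaker B encodes the same concepts at entirely different locations:
# here, a random rotation of A's space (Q is an orthogonal matrix).
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
B = A @ Q

def sim_matrix(X):
    """Pairwise cosine similarities between row vectors."""
    n = X / np.linalg.norm(X, axis=1, keepdims=True)
    return n @ n.T

# Absolute positions differ...
print(np.allclose(A, B))                          # False
# ...but the relational structure is identical.
print(np.allclose(sim_matrix(A), sim_matrix(B)))  # True
```

Rotations are only one example of a structure-preserving map; the point is merely that relational structure, not absolute location, is what two different vector spaces can share.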


Human neurons and artificial intelligence are built from quite different elements. A single synapse contains hundreds of synaptic vesicles and receptors. This is not the same as a merely multi-channel or parallel arrangement of transistors, because a synapse's behavior changes with the amount of neurotransmitter released: its ‘performance’ is variable. A transistor that produced different outputs because of past cases would simply be defective. In synapses, however, this variability is accepted as essential for animals to respond appropriately to situations. The synapse does not simply transmit signals; it operates in a multidimensional way that lets the animal learn (Dae-yeol Lee).
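The contrast can be sketched with a deliberately crude toy model (my own illustration; the class names and constants are hypothetical, not a biophysical model): a stateless transistor-like gate maps the same input to the same output every time, while a synapse-like element whose vesicle pool depletes with use gives a response that depends on its own history.

```python
class Gate:
    """Stateless binary element: same input, same output, always."""
    def __call__(self, x: int) -> int:
        return 1 if x >= 1 else 0

class Synapse:
    """History-dependent element: each spike releases a fraction of the
    vesicle pool (short-term depression), so repeated inputs transmit
    progressively less until recovery catches up."""
    def __init__(self, release: float = 0.4, recovery: float = 0.1):
        self.pool = 1.0             # available vesicle fraction
        self.release = release      # fraction of pool released per spike
        self.recovery = recovery    # replenishment rate per time step

    def __call__(self, spike: bool) -> float:
        out = 0.0
        if spike:
            out = self.release * self.pool   # output scales with the pool
            self.pool -= out                 # depletion
        # partial recovery toward the full pool
        self.pool = min(1.0, self.pool + self.recovery * (1.0 - self.pool))
        return out

gate, syn = Gate(), Synapse()
print([gate(1) for _ in range(4)])              # [1, 1, 1, 1]
print([round(syn(True), 3) for _ in range(4)])  # strictly decreasing outputs
```

The gate's history is irrelevant by design; the synapse's output is a function of everything that happened to it before, which is the variability the paragraph above describes.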

The nervous system is built on neurons that respond with movement when information about stimuli arrives at sensory receptors. According to Gáspár Jékely's theory, chemical signaling within the body is too slow for organisms to control movement effectively, so adjacent sensory cells and motor cells came together, linked by elongated extensions of the sensory cells, for rapid communication through electrical signals. Detlev Arendt's research team at EMBL (the European Molecular Biology Laboratory) reported that neurons fundamentally emerged for communication between sensory cells and motor cells.

Where machine learning is grounded in 'input data', the nervous system of animals developed out of external stimulation, that is, out of communication with the outside world. In my view, the two are very different.

If we apply the broad picture of prototype theory, the roots of human 'meaning' rest not on intellectual faculties that developed much later, but on a kind of mental counterpart to the outside world.

In other words, we can cautiously venture the bold guess that 'knowledge' and 'meaning' arise from 'feeling'.


