What do all things have in common?


Is Real World Knowledge More Valuable Than Fictional Knowledge?



Here is an excerpt from a short summary of a paper I am writing that provides some context for answering this question:

What Knowledge is not:

Knowledge is not very well understood, so I’ll briefly point out some of the reasons why we’ve been unable to precisely define what knowledge is thus far. Humanity has made numerous attempts at defining knowledge. Plato taught that justified, true belief is required for something to be considered knowledge.

Throughout the history of the theory of knowledge (epistemology), others have done their best to add to Plato’s work or create new or more comprehensive definitions in their attempts to ‘contain’ the meaning of meaning (knowledge). All of these efforts have failed for one reason or another.

Using truth value and ‘justification’ as a basis for knowledge or introducing broader definitions or finer classifications can only fail.

I will now provide a small set of examples of why this is so.

Truth value is only a value that knowledge may attain.

Knowledge can be true or false, justified or unjustified, because

knowledge is the meaning of meaning

What about false or fictitious knowledge? [Here’s the reason why I say no.]

Classifying them as something other than what they are ignores their perfectly valid structure and dynamics. Even differences in culture or language make no difference, because the objects being referred to have meaning that transcends language barriers.

Another problem is that knowledge is often thought to be primarily semantics-based or even ontology-based. Neither of these can be true, for many reasons. In the first case (semantics):

There already exists knowledge structure and dynamics for objects we cannot or will not yet know.

The same is true for objects to which meaning has not yet been assigned, such as ideas, connections and perspectives that we’re not yet aware of or have forgotten. Their meaning is never clear until we’ve become aware of or remember them.

In the second case (ontology): collations that are fed ontological framing are necessarily bound to memory, initial conditions of some kind, and/or association in terms of space, time, order, context, relation,… We build whole catalogues, dictionaries and theories about them: triads, dyads, quints, ontology charts, neural networks, semiotics and even the current research in linguistics are examples.

Even if an ontology or set of them attempts to represent intrinsic meaning, it can only do so in a descriptive ‘extrinsic’ way. An ontology, no matter how sophisticated, is incapable of generating the purpose of even its own inception, not to mention the purpose of the objects to which it corresponds.

The knowledge is not coming from the data itself, it is always coming from the observer of the data, even if that observer is an algorithm.

Therefore ontology-based semantic analysis can only produce the artefacts of knowledge, such as search results, association to other objects, ‘knowledge graphs’ like Cayley,…
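To make the ‘artefacts’ point concrete, here is a minimal toy triple store (a sketch of my own, with purely illustrative names; this is not the Cayley API). Notice that a query can only ever return associations that were explicitly stored; the results are artefacts of stored association, not generated meaning.

```python
# A toy triple store: ontology-based lookup can only return
# associations that were explicitly put in. It produces artefacts
# of knowledge (query results), never the meaning behind them.
triples = {
    ("Socrates", "teacher_of", "Plato"),
    ("Plato", "teacher_of", "Aristotle"),
}

def query(subject):
    """Return every (predicate, object) pair stored for a subject."""
    return {(p, o) for (s, p, o) in triples if s == subject}

print(query("Socrates"))   # {('teacher_of', 'Plato')}
print(query("Diogenes"))   # set() -- nothing stored, nothing returned
```

The observer who chose the triples supplied all the meaning; the algorithm merely echoes it back.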

Real knowledge precedes, transcends and includes our conceptions, cognitive processes, perception, communication, reasoning and is more than simply related to our capacity of acknowledgement.

In fact knowledge cannot even be completely systematised; it can only be interacted with using ever increasing precision.

[For those interested, my summary is found at: A Precise Definition of Knowledge – Knowledge Representation as a Means to Define the Meaning of Meaning Precisely: http://bit.ly/2pA8Y8Y]

Does Knowledge Become More Accurate Over Time?

Change lies deeper in the knowledge substrate than time.

Knowledge is not necessarily coupled with time, but it can be influenced by it. It can be influenced by change of any kind: not only time.

Knowledge may exist in a moment and vanish. The incipient perspective(s) it contains may change. Or the perspective(s) that it comprises may resist change.

Also, knowledge changes with reality and vice versa.

Time requires events to influence this relationship between knowledge and reality.

Knowledge cannot be relied upon to be a more accurate expression of reality, whether time is involved or not, because the relationship between knowledge and reality does not necessarily depend upon time. Nor are ‘more’ and ‘accurate’ necessarily coupled with time.

Example: Eratosthenes calculated the circumference of the Earth long before Copernicus published. The ‘common knowledge’ of the time (Copernicus knew about Eratosthenes, but the culture did not) was that the Earth was flat.
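As a side note, Eratosthenes’ calculation itself is simple enough to sketch. The figures below are the classically reported values (a 7.2° shadow angle at Alexandria, 5,000 stadia from Alexandria to Syene), used here purely for illustration:

```python
# Eratosthenes' estimate of the Earth's circumference, using the
# classically reported figures (illustrative values only).
shadow_angle_deg = 7.2     # sun's angle from vertical at Alexandria at noon
distance_stadia = 5000     # distance from Alexandria to Syene

# The shadow angle equals the arc between the two cities, so the
# distance is that same fraction of the full circumference.
circumference_stadia = distance_stadia * 360 / shadow_angle_deg
print(circumference_stadia)   # 250000.0
```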

Men And Their Semantics – Turning Meaning into Legos


Semantically speaking: Does meaning structure unite languages?

This work is a dead end waiting to happen. Of course it will attract much interest, money, and perhaps even yield new insights into the commonality of language, but there are better ways to get there.

What’s even sadder is that they, who should know better, will see my intention in making this clear as destructive criticism rather than as a siren warning about research governed by, and originating from, a false paradigm. These people cannot see, or overlook, the costs humanity pays for the misunderstandings that research like this causes and is based upon.

It’s even worse in the field of genetic engineering, with its chimera research. The people wasting public money funding this research need to be brought back under control.

I don’t want to criticize the researcher’s intentions. It’s their framing and methodology that I see as primitive, naive, and incomplete.

I’m not judging who they are nor their ends; rather, their means of getting there.

“Quantification” is exactly the wrong way to ‘measure/compare’ semantics; not to mention “partitioning” them!

1) The value they propose in this investigation is to extrapolate and interpolate ontology. Semantics are more than ontology. They possess a complete metaphysics, which includes their epistemology.

2) You cannot quantify qualities, because you reduce the investigation to measurement, which itself imposes meaning upon the meaning you wish to measure. Semantics, in their true form, are relations and are non-physical and non-reducible.

3) Notice also, partitioning is imposed upon the semantics (to make them ‘measurable/comparable’). If you compare semantics in such a way then you only get answers in terms of your investigation/ontology.

4) The better way is to leave the semantics as they are! Don’t classify them! Learn how they are related. Then you will know how they are compared.

There’s more to say, but I think you get the idea… ask me if you want clarification…

Typical Knowledge Acquisition Node

Knowledge Representation

A typical knowledge acquisition node showing two layers of abstraction. Note how some of the acquisition field detection moves with the observer’s perspective. You can tell, from the varying visual aspects of the fields and their conjunctions, that it has already been primed and is in use.

This node may be one of thousands/millions/billions which form when acquiring the semantics of any particular signal set.

Their purpose is to encode a waveform of meaning.

Basically it is these ‘guys’ which do the work of ‘digesting’ the knowledge contained within any given signal; sort of like what enzymes do in our cells.

The size, colour (although not represented here), orientation, quantity, sequence, and other attributes of the constituent field representations all contribute to a unique representation of the semantics the given node has encountered along its travel through any particular signal set. The knowledge representation (not seen here) comprises the results of what these nodes do.

This node represents a unique cumulative ‘imprint’ or signature derived from the group of knowledge molecules it has processed during its lifetime in the collation, similar to what a checksum does, in a more or less primitive fashion, for numerical values in IT applications.
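Since the checksum comparison is the one familiar technical anchor here, a minimal sketch of a cumulative, order-sensitive signature may help (this illustrates only the analogy, not the node model itself; the function name is my own):

```python
import hashlib

def cumulative_signature(items):
    """Fold a sequence of items into one fixed-size 'imprint'.
    The result depends on both the items and the order in which
    they were processed, like a running checksum."""
    h = hashlib.sha256()
    for item in items:
        h.update(repr(item).encode("utf-8"))
    return h.hexdigest()

sig_a = cumulative_signature(["red", "blue", "green"])
sig_b = cumulative_signature(["blue", "red", "green"])
assert sig_a != sig_b   # same items, different order, different imprint
```

The point of the analogy: a single compact value carries a trace of everything the node has ‘digested’, and in what order.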

I have randomized/obfuscated a bit here (in a few different ways), as usual, so that I can protect my work and release it in a prescribed and measured way over time.

In April I will be entering the 7th year of working on this phase of my work. I didn’t intentionally plan it this way, but the number 7 does seem to be a ‘number of completion’ for me as well.

The shape of the model was not intended in itself. It ‘acquired’ this shape during the course of its work. It could have just as well been of a different type (which I’m going to show here soon).

What is important is the ‘complementarity’ of the two shapes, as they are capable of encoding differing levels of abstraction. The inner model is more influenced by the observer than the outer one, for example. The outer shape contains a sort of ‘summary’ of what the inner shape has processed.

Complexity At the Cost of Being Simple

Computational Complexity
There are grievous problems with complexity ‘science’. Some of those problems are apparent here. I will note a few of them.

Reductionism at @13:00 is completely annoying. Epiphenomenological aspects of the problem are completely missing when you reduce into pure binary! It’s like taking you and your emotional life (with its incipient impact on your immune system) and reducing it down to DNA!

“There are way more problems than there are solutions.” @17:00! Sure! When you peel away the contextual embedding of any problem (via reductionism), then you’ve just committed a sort of lobotomy!

The definition of NP at @23:00, while correct, reveals how misguided this theory is. Not all choices are guesses, and correct answers aren’t always ‘lucky’.

Check out the response one receives from the system (algorithm) at @25:11. Did you notice something’s wrong, or what?

@26:51 Does anyone notice who is supplying the criterion for the value of ‘correct’? The algorithm is being falsely attributed with properties it can only be endowed with and not arrive at on its own!

@30:00 The rules of Tetris are known by both (algorithm and human); however, the proof of a truth value cannot be computationally arrived at in NP, yet a human being, with the skills necessary to ‘prove’ anything, can produce that proof in P! It should be obvious by now that we are going about the whole thing in the wrong way!

@31:00 the P<>NP problem is described. The problem is meaningless, and yet you’ll get a Millennium Prize for solving it! (Even ‘sane’ and ‘not sane’ find themselves in the balance! Whoa!) If you continue listening to the justification, you might want to be near a bathroom.

@32:27 Check out how NP is being determined to be ‘more’ than P! “Nobody in their right mind…”, “Obviously insane…”,… so naturally NP must be more than P!
Sounds reasonable? I don’t think so…

@32:37 Watch the disappointment: “…very annoying…” and I wonder why? The question is meaningless! Other phrasings of the P<>NP Problem are nothing special and are completely obvious: “You can’t engineer luck.” (Excuse me, but isn’t that the definition of luck in the first place?) and “Solving problems is harder than checking them.”
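For what it’s worth, the “solving is harder than checking” slogan does have concrete content, and it’s worth seeing what the field actually means by it. For a standard NP-complete problem like subset sum (my example, not the lecture’s), checking a proposed certificate takes linear time, while the naive solver searches all 2^n subsets:

```python
from itertools import combinations

def check_certificate(numbers, target, subset):
    """Polynomial-time verification: does the proposed subset work?"""
    return all(x in numbers for x in subset) and sum(subset) == target

def solve_brute_force(numbers, target):
    """Exponential-time search over all 2^n subsets."""
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return list(subset)
    return None

nums = [3, 34, 4, 12, 5, 2]
cert = solve_brute_force(nums, 9)       # finds [4, 5]
assert check_certificate(nums, 9, cert)
```

Whether that asymmetry between searching and verifying deserves a Millennium Prize is exactly what I’m questioning above.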

@34:17 “What could we possibly say… this is all kind of weird…” I don’t know anymore either, and I sure hope you don’t tell me! Are we at the end of the lecture already?

@35:53 Now we are getting to the ‘meat of the potato’. If we just “believe in… have faith in…” P<>NP, then Tetris is within NP-P! Wait a minute? That doesn’t sound like any proof to me… perhaps it’s an axiom? We’ll see. It sure looks like begging the question, but I want to be convinced so I’ll just have to wait.

@36:43 He then moves on to a ‘proof’ that looks more like a set of definitions! NP-hard and NP-complete are correctly defined, but they do not prove anything! Tetris and chess act like definitions as well!

@40:33 Now he wants to talk about reductions. Wait, weren’t we talking about them already? Let’s take a look…

Yes, we stand upon giants [Authoritarianism] @46:15 (Karp’s 3-Partition) and don’t need to think about it anymore; we just reconfirm that all NP-complete problems are reducible to each other! You find some problem that was defined by a “giant” to be a member of your classification and then show that yours is at least as hard @48:47.

If we happen to find a better solution to a member of NP-complete, then either the whole house of cards falls down or we simply reclassify it (by reduction) to P! Now believe it, or believe what you want, okay?

There will be a time when we have to revisit mathematics and do a house cleaning of this ‘cuddle muddle’.

Good News! It’s Not Just Particles! It’s Properties and Patterns of Particles! – Max Tegmark

Max Tegmark - Cosmic Explorer
“Consciousness is a mathematical pattern.”

Is it possible to explain the phenomenon of purpose away with another phenomenon of emergence?
I wonder how he defines purpose itself?
Isn’t consciousness more than our senses?
Why are we only looking at states of matter and leaving out stages, lines, levels, types,…?
Who is doing the “feeling” he’s describing?
Who gives the particles their work to do?
How are the particles different between dead and living beings?
So we are to replace our questions with a certainty of the phenomenon of consciousness and then explain that in terms of an interpretation of same?

I’m not a religious person, but the video is starting to sound like I should be one!
Is this what we get when a physicist tries to do philosophy? Oh my!

David Chalmers – Consciousness is Fundamental, Consciousness is Universal


Consciousness Is Fundamental (@09:27)
Consciousness Is Universal (@11:35)

Modern philosophy that is likely true!
I was expecting that ill feeling I get listening to modern philosophers and it started out that way, but suddenly!!! It changed!

He’s talking about my work!
Now, if he can free himself from the brain-based paradigms of consciousness…