What do all things have in common?


Strictly Speaking Can’t! Natural Language Won’t?

Werner Heisenberg – on the Language of Mathematics

Physics is only complex because it’s in someone’s interest to have it that way. The way to understanding, even if you don’t understand science, was paved with words. Even if those words led only to a symbolic form of understanding.

Common, ordinary language is quite capable of explaining physics. Mathematics is simply more precise than common language. Modern mathematics pays the price for that precision by being overly complex and subservient to causal and compositional relations. These are limitations that metaphysics and philosophy do not have.

Words in language have a structure that mathematics alone will never see as it looks for their structure and dynamics in the wrong places and in the wrong ways. Modern pure mathematics lacks an underlying expression of inherent purpose in its ‘tool set’.

With natural language we are even able to cross the ‘event horizon’ into interiority (where unity makes its journey through the non-dual into the causal realm). It is a place where mathematics may also ‘visit’ and investigate, but only with some metaphysical foundation to navigate with. The ‘landscape’ is very different there… where even time and space ‘behave’ (manifest) differently. Yet common language can take us there! Why? It’s made of the ‘right stuff’!

The mono-logical gaze with its incipient ontological foundation, as found in (modern) pure mathematics, is too myopic. That’s why languages such as Category Theory, although subtle and general in nature, even lose their way. They can tell us how we got there, but none can tell us why we wanted to get there in the first place!

It’s easy to expose modern corporate (mainstream) science’s limitations with this limited tool set – you simply need to ask questions like: “What in my methodology inherently expresses why I am looking in here?” (what purpose), or “What assumptions am I making that I’m not even aware of?”, or “Why does it choose to do that?” – and you’re already there, where ontology falls flat on its face.

Even questions like these are met with disdain, intolerance and ridicule (the shadow knows it can’t see them and wills to banish what it cannot)! And that’s where science begins to resemble religion (psyence).

Those are also some of the reasons why philosophers and philosophy have almost disappeared from the mainstream. I’ll give you a few philosophical hints to pique your interest.

Why do they call it Chaos Theory and not Cosmos Theory?
Why coincidence and not synchronicity?
Why entropy and not centropy?

Why particle and not field?
(many more examples…)

Typical Knowledge Acquisition Node

Knowledge Representation

A typical knowledge acquisition node showing two layers of abstraction. Note how some of the acquisition field detection moves with the observer’s perspective. You can tell, from the varying visual aspects of the fields and their conjunctions, that it has already been primed and is in use.

This node may be one of thousands/millions/billions which form when acquiring the semantics of any particular signal set.

Their purpose is to encode a waveform of meaning.

Basically, it is these ‘guys’ that do the work of ‘digesting’ the knowledge contained within any given signal – sort of like what enzymes do in our cells.

The size, colour (although not represented here), orientation, quantity, sequence, and other attributes of the constituent field representations all contribute to a unique representation of the semantics the given node has encountered along its travel through any particular signal set. The knowledge representation (not shown here) is composed of the results of what these nodes do.

This node represents a unique cumulative ‘imprint’ or signature derived from the group of knowledge molecules it has processed during its lifetime in the collation – similar to what a checksum does, in a more or less primitive fashion, for numerical values in IT applications.
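The checksum comparison can be made concrete with a minimal sketch. The Fletcher-16 routine below is a real, standard checksum; the `NodeSignature` class and the attribute names (`size`, `orientation`, `quantity`) are hypothetical illustrations of the analogy only, not the model’s actual encoding:

```python
def fletcher16(data: bytes) -> int:
    """Classic Fletcher-16 checksum: two running sums folded together."""
    s1 = s2 = 0
    for b in data:
        s1 = (s1 + b) % 255
        s2 = (s2 + s1) % 255
    return (s2 << 8) | s1

class NodeSignature:
    """Accumulates processed 'knowledge molecules' into one compact imprint."""
    def __init__(self) -> None:
        self._state = b""

    def ingest(self, molecule: dict) -> None:
        # Order matters: the sequence of molecules shapes the imprint,
        # just as the node's processing history shapes its signature.
        for key in sorted(molecule):
            self._state += f"{key}={molecule[key]};".encode()

    @property
    def imprint(self) -> int:
        return fletcher16(self._state)

node = NodeSignature()
node.ingest({"size": 3, "orientation": 90, "quantity": 2})
node.ingest({"size": 1, "orientation": 45, "quantity": 5})
print(node.imprint)  # one compact value summarizing everything processed
```

As with any checksum, the imprint is a lossy summary: it cannot reconstruct the molecules that produced it, but identical processing histories always yield identical imprints.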

I have randomized/obfuscated a bit here (in a few different ways), as usual, so that I can protect my work and release it in a prescribed and measured way over time.

In April I will be entering the 7th year of working on this phase of my work. I didn’t intentionally plan it this way, but the number 7 does seem to be a ‘number of completion’ for me as well.

The shape of the model was not intended in itself. It ‘acquired’ this shape during the course of its work. It could have just as well been of a different type (which I’m going to show here soon).

What is important is the ‘complementarity’ of the two shapes, as they are capable of encoding differing levels of abstraction. The inner model is more influenced by the observer than the outer one, for example. The outer shape contains a sort of ‘summary’ of what the inner shape has processed.

Precursors Of Knowledge

Fractal fields provide a nice framework in which to think about knowledge. They are not all we need for precision, but they are helpful in a generic way. I’ll be posting more on them as the knowledge representations are published, because there are many ‘gaps to fill’ to show how these relate to knowledge.

More sources: