Typical Knowledge Acquisition Node
A typical knowledge acquisition node showing two layers of abstraction. Note how some of the acquisition field detection moves with the observer's perspective. You can tell from the varying visual aspects of the fields and their conjunctions that it has already been primed and is in use.
This node may be one of thousands/millions/billions which form when acquiring the semantics of any particular signal set.
Their purpose is to encode a waveform of meaning.
Basically, it is these 'guys' that do the work of 'digesting' the knowledge contained within any given signal; sort of like what enzymes do in our cells.
The size, colour (although not represented here), orientation, quantity, sequence, and other attributes of the constituent field representations all contribute to a unique representation of the semantics the given node has encountered along its travel through any particular set of signals. The knowledge representation (not seen here) comprises the results of what these nodes do.
This node represents a unique cumulative 'imprint' or signature derived from the group of knowledge molecules it has processed during its lifetime in the collation, similar to what a checksum does, in a more or less primitive fashion, for numerical values in IT applications.
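The checksum comparison can be made concrete with a minimal sketch. This is only an illustration of the analogy, not the node model itself (which is deliberately not disclosed here); the names `Imprint` and `process` are hypothetical, and CRC32 stands in for whatever richer signature the actual nodes accumulate.

```python
import zlib

class Imprint:
    """Accumulates a cumulative signature over a stream of processed items,
    the way a checksum summarizes a sequence of values."""

    def __init__(self):
        self.signature = 0

    def process(self, item: str) -> None:
        # Fold each item into the running signature. Order matters,
        # just as sequence is one of the contributing attributes above.
        self.signature = zlib.crc32(item.encode("utf-8"), self.signature)

a = Imprint()
for token in ["signal", "semantics", "meaning"]:
    a.process(token)

b = Imprint()
for token in ["meaning", "semantics", "signal"]:  # same items, different order
    b.process(token)

print(a.signature, b.signature)
```

Like a checksum, the resulting signature is a compact, deterministic imprint of everything the object has processed, and reordering the inputs yields a different imprint.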
I have randomized/obfuscated a bit here (in a few different ways), as usual, so that I can protect my work and release it in a prescribed and measured way over time.
In April I will be entering the 7th year of working on this phase of my work. I didn’t intentionally plan it this way, but the number 7 does seem to be a ‘number of completion’ for me as well.
The shape of the model was not intended in itself. It 'acquired' this shape during the course of its work. It could just as well have been of a different type (which I'm going to show here soon).
What is important is the 'complementarity' of the two shapes, as they are capable of encoding differing levels of abstraction. The inner model is more influenced by the observer than the outer one, for example. The outer shape contains a sort of 'summary' of what the inner shape has processed.
This entry was posted on Jan 4, 2016 by heurist. It was filed under Big Data, BigData, Consciousness, Fields, Holons, Holors, Knowledge, Knowledge Representation, Language, Learning, Linguistics, Mathesis Universalis, Semantics, Wisdom and was tagged with insight, knowledge, learning, Mathesis Universalis, Metaphysics, understanding, wisdom.