LINK is a prototype model for Machine Cognition, built on a "Hyper Symbolic" approach. It differentiates itself from conventional AI models in the following ways:
Machine Cognition (MC) describes a system governed by cognitive dynamics: how a system acquires and uses cognition.
It is important to distinguish this technology from established ones such as LLMs, ANNs, Transformers, and NLP, as well as from the statistical ML approach in general. MC is not statistical and not "sub-symbolic"; it demands no pre-processing and no data labeling, and it is not a black box in its operation.
MC is deterministic and therefore fully transparent. Its operation is entirely symbol based and therefore fully understandable by humans (and other processes).
This makes MC a complementary upgrade for generative AIs, which lack mentality because this cognitive layer is absent.
The problem of intelligence is seen as the problem of causal understanding and reason-based thinking, which can be translated to: "How are abstraction and logic learned and applied automatically and autonomously?"
This leads to a system which has agency (it knows what to do and what to consider), efficiency (it runs on quantitatively and qualitatively sparse data) and transparency (every step in that process can be inspected and exposed).
LINK is inspired by the "old-fashioned symbolic AI" methodology (GOFAI); it handles symbols (patterns) and is able to manipulate them as well as to "symbolise" this manipulation yet again.
This way it can transform information effectively and abstractly, and it comes up with adapted, unique patterns, which can be seen as creativity and innovation.
LINK tries to mimic human intelligence in its cognitive aspects: behaving intelligently with limited and fuzzy information.
LINK is text based and can solve cognitive tasks on its own with little data, and does so reliably because of its deterministic nature. It can abstract and use these abstractions to generate adaptive behaviour and reasoning in completely new and unique situations without extensive prior training.
Furthermore, it will be able to contemplate situations mentally, so it can reflect on the past and plan for the future.
To verify this, the following four tests were devised:
| # | ID | Description | Status |
|---|---|---|---|
| 1 | ODYSSEUS | Elementary learning and application of patterns | PASSED |
| 2 | ARISTOTLE | Recognition of principles and their use for adaptive behavior | PASSED |
| 3 | HERCULES | Compensation of missing information by explicit remembering | PASSED |
| 4 | ZEUS | Recursive mental integration of information | PENDING |
The final test, ZEUS, is currently under development and intense testing, but has already passed some of its demanding requirements. The goal is a self-sufficient, autonomous model which operates in a "long-running" fashion on any conventional device.
LINK is designed and developed to serve in any field where cognition is demanded in a symbol-based (factual) environment. It can assist as a cognitive aid, a conversational partner for idea exploration, and a driver for innovative thinking. Of course, the more specifically the model is trained, the more it can achieve.
Being complementary to generative AI, it can synergize with it to further complete the automation landscape.
Theoretically none, as long as enough memory and processing time are available for the provided training data. Any conventional computer can run it in reasonable time; no GPU or other special hardware is needed!
The most limiting factors are currently the available memory in which the model data is held and the speed of the algorithm that accesses and manipulates this data. We are working to reduce these dependencies.
LINK is a prototype, but it is fully functional within its test scenarios, so it won't take long to create a product out of it. Every test mentioned above provides ready-to-use functionality.
The benchmark for a cognitive model is different than for generative models. In short, the benchmark is human cognition.
Detailed performance metrics will be provided soon. Currently, the four tests combined run within a second and use less than 100 KB of memory.
Firstly, as mentioned above, LINK is NOT a generative AI; its purpose is not to generate content per se. Secondly, we do not yet have the resources to provide such a "sandbox" environment to play with it. Examples of training scenarios will soon be available to show what interaction with LINK may look like; for now, you can imagine it like chatting with a human who emphasizes logical thinking and principle understanding.
These are end-to-end test cases: the model is completely empty before each test runs, and everything it is trained on is provided by the input, with no pre-processing and no pre-training. Soon we will provide more test cases to show its full potential.
PURPOSE: Applied logic
NOTE: What is not there
INPUT:
COLUMBO is SMART
ELVIS is MUSICAL
HERCULES is STRONG
look he is so SMART
hey COLUMBO
look he is so MUSICAL
hey ELVIS
look he is so STRONG
OUTPUT:
hey HERCULES
METRICS:
~270 ms | ~25 KB (Apple M1 Pro)
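To make the pattern at work concrete, here is a minimal sketch of one way this completion could be reproduced. The parsing rules and the `facts` dictionary are illustrative assumptions for this write-up, not LINK's actual mechanism:

```python
# Illustrative sketch only -- NOT LINK's algorithm. It mirrors the test above:
# learn "NAME is PROPERTY" facts, observe the call/response pattern,
# then answer the final, unanswered prompt by analogy.

lines = [
    "COLUMBO is SMART",
    "ELVIS is MUSICAL",
    "HERCULES is STRONG",
    "look he is so SMART",
    "hey COLUMBO",
    "look he is so MUSICAL",
    "hey ELVIS",
    "look he is so STRONG",
]

facts = {}  # property -> name, learned from the "X is Y" lines
for line in lines:
    parts = line.split()
    if len(parts) == 3 and parts[1] == "is":
        facts[parts[2]] = parts[0]  # e.g. facts["SMART"] = "COLUMBO"

# The answered prompts show the template "look he is so <P>" -> "hey <name(P)>",
# so the same template is applied to the final prompt, which has no answer yet.
prompt = lines[-1]                       # "look he is so STRONG"
response = "hey " + facts[prompt.split()[-1]]
print(response)                          # -> hey HERCULES
```

The point of the sketch is the "what is not there" note above: the output is the one call that was never answered in the input.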
PURPOSE: Deductive Reasoning
NOTE: Notice the plural nouns!
INPUT:
one woman many women
one rose many roses
men is plural for man
women is plural for woman
geese is plural for goose
every plant is flora
a rose is a plant
so roses are flora
every bird is alive
a goose is a bird
so geese are alive
every human is mortal
a woman is a human
OUTPUT:
so women are mortal
METRICS:
~200 ms | ~20 KB (Apple M1 Pro)
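As with the first test, a small sketch can illustrate the deduction step. The sentence parsing and the hard-coded plural table below are assumptions for illustration; in the test itself, the plural forms are taught by the input:

```python
# Illustrative sketch only -- NOT LINK's algorithm. It combines a major
# premise "every X is Y" with a minor premise "a Z is a X" into the
# conclusion "so plural(Z) are Y".

# In the real test these plural forms are learned from lines such as
# "women is plural for woman"; here they are hard-coded for brevity.
plural = {"man": "men", "woman": "women", "goose": "geese", "rose": "roses"}

def conclude(major, minor):
    # major: "every X is Y", minor: "a Z is a X"
    _, x, _, y = major.split()
    _, z, _, _, x2 = minor.split()
    assert x == x2, "middle term must match for the syllogism to apply"
    return f"so {plural[z]} are {y}"

conclusion = conclude("every human is mortal", "a woman is a human")
print(conclusion)                        # -> so women are mortal
```

The same two-premise shape covers the worked examples in the input, e.g. `conclude("every plant is flora", "a rose is a plant")` yields "so roses are flora".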
If you are interested in this unconventional and enterprising technology, send us your thoughts by email or let's discuss it on Discord.