What is Bi?

When you use a programming language, you depend on its symbols having a specific meaning; it is rare for a programming language to allow fuzzy, context-based meaning, because that complicates the implementation. In natural languages, concepts and symbols are both discrete and fuzzy at the same time.

To make programming languages more natural, we have to embrace this dichotomy. One way to make symbols behave like this is to represent symbols and/or context as vectors: we gain fuzziness while still preserving discreteness.

Something like having your cake and eating it too.

  • That is why Bi is built on top of the so-called VSA (Vector Symbolic Architecture).

As the name implies, these are not your run-of-the-mill symbols but vector-based ones: a symbolic system built on distributed vector representations, instead of the traditional approach of discrete symbols (where a symbol represents an entity in an all-or-none fashion). The VSA approach allows symbols to be compared for similarity.
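To get a feel for this, here is a minimal Python sketch of the idea (a hypothetical illustration, not Bi’s actual API): a symbol is a high-dimensional random bipolar vector, and similarity is the cosine between vectors, so comparison is graded rather than all-or-none.

```python
# Minimal, hypothetical VSA sketch (illustration only, not Bi's actual API).
import numpy as np

DIM = 10_000          # high dimensionality keeps random symbols ~orthogonal
rng = np.random.default_rng(0)

def symbol():
    """Draw a fresh random bipolar (+1/-1) hypervector."""
    return rng.choice([-1, 1], size=DIM)

def similarity(a, b):
    """Cosine similarity: ~1 for the same symbol, ~0 for unrelated ones."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

cat, dog = symbol(), symbol()
print(similarity(cat, cat))   # ~1.0 : identical symbols match exactly
print(similarity(cat, dog))   # ~0.0 : independent symbols are near-orthogonal
```

The high dimensionality is what makes this work: two independently drawn random vectors are almost certainly near-orthogonal, so symbols stay effectively discrete while similarity remains a continuous measure.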

From a practical programming standpoint, what you will be able to do with Bi is twofold.

You can just use the basic VSA capability of the package and build your own symbolic system in Python, or simply play with the system.

Or you can use the full framework, which mimics a Prolog system in its outward behavior. There were many reasons I started with this type of symbolic system; as you will see, it is a very natural extension of VSA.

A Lisp system would probably feel at home too. In this tutorial I’m giving you at least the basic tools and theoretical background needed to start experimenting. Don’t be scared: once we are finished, you will be surprised how easy and natural it is.

At the moment I’m not so much interested in the pure logic capabilities of Prolog systems, but more in their symbol-manipulation and relational characteristics. In the near future I also hope to integrate a Production system, and later Reinforcement learning, as seamless parts of the system. (I have some ideas floating around in my mind for how to do it in a coherent way. Let’s hope they work ;)

One side benefit of going on this journey with me is that you will also learn how to build your own Prolog interpreter, so you get two for one.

What is the goal of the Bi project?

The general idea is to create a hybrid but coherent system integrating ideas from Cognitive science and Computer science. As I continue to experiment with different technologies, my initial goal has morphed and changed somewhat: I’m no longer solely interested in a pure Brain approach.

There are already many other projects, with much more talented people, exploring those domains. Now I’m mostly “stealing” their ideas and integrating them into my own Frankenstein project ;)

The same goes for Neural Networks: the amount of money and people involved is staggering, and the chance of making a genuine discovery there, unless you are a superb math geek, is close to zero. Again, if I can steal some ideas, that’s fine.

On the other hand, several critical areas are left open for exploration: bridging the gap between Connectionism and Symbolism, vector-based symbolism, and mixing different processing techniques such as Logic, Production systems, and general symbol manipulation (lambda calculus, unification, …). This is the avenue I’m taking with Bi.

Before we start, let’s explore some projects and ideas which influenced me, and which you have probably never heard about, you being an NN aficionado :)

Numenta Grok

I have already written about the Numenta project here. There you will also find a word or two about Encoders and Spoolers/Mappers, which we will need in the More CUPs part of this tutorial.

Nengo/NEF

Nengo is the biggest functional simulation of the brain so far. Here the building block is a neuron “simulator” which behaves like a natural neuron, i.e. it accepts spike trains as input and generates spike trains as output. Internally the signal is decoded into a scalar, vector, or tensor; then any function is applied, after which the output is encoded as a spike train again.

In principle you never simulate a single neuron, but combine neurons into a group. Because a neuron is implemented as a linear equation, the more neurons you group, the better the approximation of the desired transformation of the signal, and the more complex the functions that can be implemented; the sketch below illustrates the idea with simplified rate neurons.
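Here is a toy, hedged sketch of that principle (assumptions: simplified rate neurons instead of spiking ones, and nothing like Nengo’s actual API): a population of random tuning curves encodes a value, and least-squares linear decoders reconstruct an arbitrary function of it.

```python
# Toy sketch of the NEF idea (assumption: rate neurons, not Nengo's API).
# A population of random tuning curves encodes x; linear least-squares
# decoders reconstruct a target function f(x) = x**2 from the activities.
import numpy as np

rng = np.random.default_rng(1)
n_neurons = 50
x = np.linspace(-1, 1, 200)

# Each neuron has a random preferred direction (encoder), gain and bias.
encoders = rng.choice([-1.0, 1.0], n_neurons)
gains = rng.uniform(0.5, 2.0, n_neurons)
biases = rng.uniform(-1.0, 1.0, n_neurons)

# Rectified-linear tuning curves: activity of every neuron for every x.
A = np.maximum(0, gains * np.outer(x, encoders) + biases)  # (200, 50)

# Solve for linear decoders d so that A @ d approximates the target.
target = x ** 2
decoders, *_ = np.linalg.lstsq(A, target, rcond=None)

print(np.abs(A @ decoders - target).max())  # small approximation error
```

Increasing n_neurons shrinks the approximation error, which is exactly the point made above about grouping neurons.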

You might think that, because I mentioned linear equations, this approach is yet another NN. Far from it: the NEF approach resembles the inner workings of the brain so closely that it can be used to test the influence of drugs on performance in psychological tests.

On top of that, you can swap the neuron simulator for more complex ones to create more detailed and believable simulations; of course, you would then need more computing resources. At the next level of abstraction, NEF uses the so-called Semantic Pointer (SP), so that you can start doing symbolic manipulation.

[Figure: the NEF algorithm]

ACT-R and SOAR

I’ve never tested either ACT-R or SOAR, only read about them, but here is the gist.

Architecturally they resemble a BRAIN-CPU, if I can be facetious. Both are Production systems.

A production system is a bunch of IF-THEN rules which compete for execution. Not to be confused with LOGIC systems like Prolog.

Production systems are heavily state-based and use working memory to hold state and control actions. SOAR uses one complex working memory; ACT-R uses multiple simpler buffers, which are also used to interface with other subsystems.
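To make the match-and-fire cycle concrete, here is a minimal, hypothetical production system in Python (an illustration only, nothing like ACT-R’s or SOAR’s real machinery): rules are matched against a working memory of facts, and a trivial conflict-resolution step decides which matching rule fires.

```python
# Minimal, hypothetical production system (illustrative; not ACT-R/SOAR).
# Rules are (name, condition, action) triples matched against a working
# memory of facts; conflict resolution picks which matching rule fires.

working_memory = {"hungry", "has-money"}

rules = [
    ("buy food", lambda wm: {"hungry", "has-money"} <= wm,
                 lambda wm: (wm - {"has-money"}) | {"has-food"}),
    ("eat",      lambda wm: {"hungry", "has-food"} <= wm,
                 lambda wm: (wm - {"hungry", "has-food"}) | {"full"}),
]

while True:
    # Match phase: collect every rule whose condition holds right now.
    matched = [(name, act) for name, cond, act in rules if cond(working_memory)]
    if not matched:
        break
    # Conflict resolution: here simply "first match wins".
    name, act = matched[0]
    working_memory = act(working_memory)
    print(f"fired {name!r} -> {working_memory}")
```

Real architectures replace the “first match wins” line with much richer conflict resolution, e.g. based on utility, recency, or specificity.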

Of the two, ACT-R is more brain-like. It has a distinct Declarative part (facts and “frozen rules”) and a Procedural part (the active Production system).

Cognitive science

Nobody knows what Cognitive science is! But in general it is a mix of Neuroscience, Psychology, Computer science, and several other disciplines.

What interests me is that it explores topics like figuring out the mental subsystems that make up the Brain, understanding how human memory works, and investigating Induction, Analogy, Abstraction, Logic … together with their possible implementation and modeling.

Connectionism vs Symbolism

Probably the hottest debate in AI nowadays is: how does the brain compute? Is it an AI Neural Network or mental Symbolic manipulation? Why not both?

In the 80s the predominant view was that it is Symbolic; with the advance of Deep NNs the view shifted the other way, so much so that today probably 95% of research goes towards Connectionism.

I’m a sympathizer of the Symbolic approach, with a caveat.

For example: Convolution layers generate features! Isn’t a feature another name for a symbol? Of course it is! Yes, you can pass features to the next layer via connections, but you can also extract them and do symbol manipulation on them.

Symbolism integrated with Connectionism allows algebraic and logical operations to be performed on distributed patterns, which provides the capability to model cognitive processes.

My working hypothesis is that the basic preconditions for high cognition are:

- using symbols
- using variables: placeholders for symbols
- doing operations over variables, like we do e.g. in Algebra
- creating structures composed of symbols in different ways

Because VSA is vector-based, it provides grounding and a natural way to build composite structures; the sketch below shows all four preconditions at work.
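As a closing illustration, here is a hedged Python sketch (hypothetical names, not Bi’s actual API): roles act as variables, binding is an algebraic operation over them, and the bundled result is a composite structure you can query.

```python
# Hypothetical VSA sketch (not Bi's actual API): role*filler binding via
# elementwise multiplication, bundling via addition, querying via unbinding.
import numpy as np

DIM = 10_000
rng = np.random.default_rng(2)
vocab = {name: rng.choice([-1, 1], size=DIM)
         for name in ("NAME", "AGE", "alice", "42")}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Composite structure: NAME*alice + AGE*42 bundled into a single vector.
record = vocab["NAME"] * vocab["alice"] + vocab["AGE"] * vocab["42"]

# Unbinding a role recovers a noisy copy of its filler, because bipolar
# vectors are their own inverses under elementwise multiplication.
noisy = record * vocab["AGE"]
best = max(vocab, key=lambda name: cosine(noisy, vocab[name]))
print(best)  # -> '42'
```

Bundling more role–filler pairs into the same vector degrades each filler gracefully, which is exactly the mix of discreteness and fuzziness discussed at the top.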