sturex/sonn

A private Java lab for Artificial General Intelligence
Self-organizing AI (RL)

This is a Java lab for developing and experimenting with brand-new AI technology. The technology is fully my own and hasn't been published anywhere. The core code is non-commercial for now and will remain non-commercial, although commercial products based on it are feasible.

How it works

The idea is based on principles of self-organization: the network structure grows adaptively by interfering snapshots of the previous and current states. Long-term memory is written into the network structure, and short-term memory is written into the states of nodes and edges.
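As a hedged illustration of the snapshot idea (the class and method names below are my own invention, not the project's actual API), interference between two state snapshots could be sketched as recording a structural link from each node active in the previous snapshot to each node active in the current one:

```java
import java.util.Set;
import java.util.TreeSet;

// Hypothetical sketch, not the project's actual algorithm: co-activity
// across consecutive snapshots (short-term memory) is turned into new
// edges (long-term memory written into the structure).
class GrowthSketch {
    static Set<String> interfere(Set<Integer> previous, Set<Integer> current) {
        Set<String> newEdges = new TreeSet<>();
        for (int from : previous)
            for (int to : current)
                if (from != to)
                    newEdges.add(from + "->" + to); // structural trace of the transition
        return newEdges;
    }
}
```

The point of the sketch is only that growth is driven by comparing two snapshots, not by any global objective.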

The model does not use complex math formulas for neuron or synapse behavior. All computations are performed on a graph where each node acts like a gate for the incoming abstract flow, which is divisible but equal.
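To make the gate idea concrete, here is a minimal toy sketch (my own naming, not the repository's Core classes): a node does not transform the flow, it only splits it equally among its outgoing edges, and only sinks accumulate what arrives:

```java
import java.util.ArrayList;
import java.util.List;

// Toy sketch of a node-as-gate: incoming flow is divided equally among
// outgoing edges; a node with no outgoing edges simply collects the flow.
class GateNode {
    final List<GateNode> outgoing = new ArrayList<>();
    double received = 0.0;

    void accept(double flow) {
        if (outgoing.isEmpty()) {          // sink: keep the flow
            received += flow;
            return;
        }
        double share = flow / outgoing.size(); // divisible but equal
        for (GateNode next : outgoing)
            next.accept(share);
    }
}
```

Note this is just arithmetic over a graph, consistent with the "no complex math formulas" claim above.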

✨ Priceless thousand Source Lines of code ✨

While the core logic will certainly stay at a few SLOC, its abilities are expected to be very wide and powerful.

Tags (technologies) the project is relevant to:

| Tag | Description |
| --- | --- |
| Neural network | Provides Receptor, Effector and Neuron abstractions. |
| Self-organization | Only local rules are applied while the network grows. |
| Reinforcement learning | No supervisor at all. A true Black Box abstraction. |
| Structural adaptation | The network adjusts its structure, adapting to the input signals. |
| Associative memory | The network uses biologically inspired Reflexes for forming Associations. |
| Mathless approach | No math at all is used in the network-growth algorithms. |

Potential applications:

Visualization

I use the awesome GraphStream library for laying out the network. Visualization is aimed mostly at development and showcase purposes.

Green is used for excitatory synapses, red for inhibitory ones.

Static and dynamic layouts can be applied as follows:

```java
List<NetworkEventsListener> listeners = List.of(
        new LayoutAdapter(new GraphStreamStaticLayout()),
        new LayoutAdapter(new GraphStreamDynamicLayout()));
Network network = new Network(listeners, 400);
```

Static layout

It is just an adjacency matrix of the underlying directed graph. The Flow (an expression of the core concept) passes from the bottom to the left side of the matrix. The circles are neurons, aka nodes; the squares stand for edges. The static layout has event-based methods for reacting to the Flow passing through nodes and edges, which is why the squares differ in opacity and the circles in size.
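The event-based mechanism can be illustrated with a hypothetical sketch (none of these names are the project's actual Visualization API): each flow event bumps a per-element visual weight, periodic decay fades it, and the weight is mapped to an opacity:

```java
// Hypothetical sketch of the event-driven rendering idea: Flow events
// increase a visual weight, decay fades it each tick, and the weight is
// mapped to an opacity in [0, 1] for drawing the square or circle.
class VisualWeight {
    private double weight = 0.0;

    void onFlowPass() { weight += 1.0; }  // called when Flow crosses the element
    void decay()      { weight *= 0.9; }  // called once per tick
    double opacity()  { return Math.min(1.0, weight / 10.0); }
}
```

Elements that the Flow visits often would render more opaque (or larger) than rarely visited ones.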


Dynamic layout

Below is a basic example of how the network grows using GraphStream's out-of-the-box dynamic layout. Note that the recording shows a very simple graph layout; complex ones are totally unreadable and useless.


Project state

Proof-of-concept, single-threaded, non-optimized, sometimes ugly

Board

Next milestone: fine-tune reinforcement learning and associative memory formation

Done

  • The original Self-organized Neural Network pet project, rewritten from scratch
    • Abstract Nodes and Edges, Graph draft. Check the Core package
    • Concrete Receptor, Effector, PainEffector, Neuron (aka hidden network unit) in the Neural package
  • Network self-organized growing
    • ⚠️ An ugly workaround is implemented in the core and should be fixed
  • Associative memory formation using bio-inspired Reflexes.
  • Different kinds of Receptor. See Network code
    • basic reception for any single Object
    • reception for a set of objects bound to a specific receptor (strict or dictionary)
    • adaptive (auto-extensible, on-demand) reception
    • floating point values bucketing
  • PainEffector as the core mechanism for implementing the Reinforcement Learning paradigm
  • Event-driven Network visualization, see Visualization package
    • Static layout (adjacency matrix)
    • Dynamic layout (graph) supporting events from every subclass of Node
  • Real-world application samples
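The "floating point values bucketing" receptor kind can be illustrated with a small hedged sketch (the method below is mine, not the Network code): a continuous value is clamped to a range and mapped to one of N discrete buckets, each of which could feed a separate reception channel:

```java
// Hypothetical illustration of floating point value bucketing: the
// continuous input is discretized so each bucket can act as a distinct
// receptor input.
class BucketingSketch {
    static int bucketOf(double value, double min, double max, int buckets) {
        double clamped = Math.max(min, Math.min(max, value));
        int idx = (int) ((clamped - min) / (max - min) * buckets);
        return Math.min(idx, buckets - 1); // the max value falls into the last bucket
    }
}
```

The adaptive (auto-extensible) receptor kind would differ by allocating new buckets on demand instead of fixing the range up front.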

In progress

  • Playground and Samples for:
    • Single-pass (continuous) Network training
    • Supervised network construction

ToDo

  • Wiki pages
  • Add samples covering most of Real-world applications

How to start with it

  • Clone the project and open it in your favorite IDE.
  • Start exploring the Playground package or the Samples package,
  • or create your own class with a main method with contents as below:
```java
// Note: Thread.sleep requires the enclosing main method to declare
// throws InterruptedException (or to catch it).
Random random = new Random();
Network network = new Network();

network.addListener(new LayoutAdapter(new GraphStreamStaticLayout()));
network.addListener(new LayoutAdapter(new GraphStreamDynamicLayout()));

network.addReceptor(random::nextBoolean);
network.addReceptor(random::nextBoolean);
network.addReceptor(random::nextBoolean);
network.addReceptor(random::nextBoolean);

for (int idx = 0; idx < 20; idx++) {
    network.tick();
    Thread.sleep(50);
}
```

Suddenly some words on AGI philosophy I profess

There is NO Intelligence as a condition we can measure. It's all about natural selection: nature just eliminates the species which do not conform to current conditions, and we merely observe the symptoms, calling them Intelligence.

Everything that can accept flow from the outer environment, split it using its own inner structure, and finally return it to the outer medium is already intelligent. The nature of the flow is irrelevant: it can be neurotransmitters and hormones, or enumerated emotions, each bound to a certain Receptor implemented in Java code.

Contributing

I doubt you can grasp the underlying technology while there are no articles describing it. So just hold tight and wait 😀

Contacts

Feel free to connect with me via LinkedIn or Facebook