Shared data source between AgentSensorSubroutines and AgentSubroutines #5

Open
jesselawson opened this issue Apr 12, 2019 · 1 comment
Labels: enhancement (New feature or request)

Comments

@jesselawson (Owner)

AgentSensorSubroutines and AgentSubroutines both require access to a shared data store, preferably a document-based (i.e., NoSQL) store, so that we don't have to continually amend table schemas.
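
As a purely illustrative sketch (AssociationStore, its method names, and the in-memory dict backing are assumptions, not existing code), a schema-less store shared by both kinds of subroutines could be as simple as:

# Minimal sketch of a document-style (schema-less) association store.
class AssociationStore:
    def __init__(self):
        # One list of free-form documents per agent; no table schema to amend.
        self._docs = {}

    def insert(self, agent_id, document):
        # Store an arbitrary dict, e.g. {'thing': 'plant03', 'key': 'feel', 'value': 'pokey'}.
        self._docs.setdefault(agent_id, []).append(dict(document))

    def find(self, agent_id, **criteria):
        # Return every document for this agent matching all given key/value pairs.
        return [d for d in self._docs.get(agent_id, [])
                if all(d.get(k) == v for k, v in criteria.items())]

Both AgentSensorSubroutines and AgentSubroutines would then read and write through the same store instance without ever touching a migration.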

Example

Let's say we have a sensor subroutine AgentSensorSubroutineFeel that checks neighboring cells for something the agent can feel. If it finds something, it will report that there is something feelable there, along with its qualities. For example, it might post something like this to the agent's brain:

feel_result = the_entity.get_features('feel')  # ask the neighboring entity for its 'feel' features
agent.remember(feel_result)                    # post the result to the agent's brain

The entity's get_features() function may return something in the form of an array of arrays, each subarray containing a [<thing>, <key>, <value>] triple, like this:

# Given 'feel' as the requested feature, if this entity contains any features that
# are identified by 'feel', return them
[
  ['plant03', 'feel', 'pokey'],
  ['plant03', 'feel', 'wet'],
  ['plant03', 'feel', 'hard']
]
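
For illustration only, an entity could satisfy that interface like this (Entity, its features attribute, and this get_features signature are assumptions rather than the project's actual API):

class Entity:
    def __init__(self, name, features):
        self.name = name
        self.features = features  # list of (key, value) pairs, e.g. ('feel', 'pokey')

    def get_features(self, requested_key):
        # Return a [thing, key, value] triple for every feature matching the requested key.
        return [[self.name, key, value]
                for key, value in self.features
                if key == requested_key]

plant03 = Entity('plant03', [('feel', 'pokey'), ('feel', 'wet'), ('feel', 'hard')])
plant03.get_features('feel')
# -> [['plant03', 'feel', 'pokey'], ['plant03', 'feel', 'wet'], ['plant03', 'feel', 'hard']]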

Now the brain for this agent will have three Experiential Associations for plant03's feel feature (which is only knowable now because we have an AgentSensorSubroutine for feel). When it comes time to process sensory data for this agent during a given step, a subroutine can now 'recall' any associations for any 'plant' thing, and specifically for 'plant03'.
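
In terms of the hypothetical AssociationStore sketched above (with 'agent01' as a placeholder agent id), posting and recalling those three associations might look like:

store = AssociationStore()
for thing, key, value in plant03.get_features('feel'):
    store.insert('agent01', {'thing': thing, 'key': key, 'value': value})

# Recall everything known about plant03 specifically...
store.find('agent01', thing='plant03')
# ...or widen to any 'plant' thing by filtering on a name prefix instead:
[d for d in store.find('agent01') if d['thing'].startswith('plant')]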

Organizing sensory input this way--thing, key, value--doesn't by itself give an agent any ability to generalize about the things it senses. For example, let's say that an agent has a wealth of associations about plants 01-09. Can it make a generalization about plants based on what it knows about plant01, plant02, etc.?

Perhaps the way to create a 'production' model of a plant is to take into account all the associations about the various plantNNs and form a generalized plant model?
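
One purely illustrative way to mine the associations into such a model is to count how often each (key, value) pair shows up across the plantNNs; build_general_model below is an assumption about how that could work, not existing code:

from collections import Counter

def build_general_model(associations, thing_prefix):
    # Aggregate [thing, key, value] triples across every thing matching the prefix
    # (e.g. 'plant' covers plant01..plantNN) into rough per-feature weights.
    seen = set()
    for thing, key, value in associations:
        if thing.startswith(thing_prefix):
            seen.add((thing, key, value))
    things = {t for t, _, _ in seen}
    if not things:
        return {}
    counts = Counter((k, v) for _, k, v in seen)
    # Weight = fraction of distinct things that exhibited this (key, value) pair.
    return {pair: n / len(things) for pair, n in counts.items()}

For instance, if two out of three observed plantNNs feel wet, the generalized model gives ('feel', 'wet') a weight of 2/3.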

As an example, let's say I know that plant01 feels wet and pokey, plant02 feels wet and soft, and plant03 feels dry and soft. Now let's say I am hungry and I decide, not having any associations between plants and edibility, to try to eat plant01 and get the following returned:

['plant01', 'taste', 'sour'],
['plant01', 'taste', 'chalky'],
['plant01', 'taste', 'bitter']

Given this new information, my AgentSubroutineEat subroutine should read this sensor data and create new associations based on it for the Eat subroutine (since taste and consumption are closely linked), along these lines (pseudocode; a rough sketch in code follows after the list):

  1. Check what cell I am on. Is there food there? [Yes, a plant01]
  2. Get all known associations for plant01. If there are none, go to 3. If there is an 'edible' feature, check it: if plant01 is edible, eat it; if not, don't eat it.
  3. If no known edible-feature associations exist for plant01, what features exist for any plant where edible = True? If none exist, go to 4. If there are edible plantNNs, should we try to eat plant01 based only on what we already know? If yes, eat it and learn a new association. If no, don't learn anything new--since we haven't tried to eat it.
  4. No known edible plants exist at all. What about anything where 'taste' is sour, chalky, or bitter? Are those edible?

In this way, the ability to generalize is an innate feature of having compiled sensory data in a way that can be mined by a subroutine.
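
Purely as a sketch of that decision procedure (the function name, its parameters, and the 'try it if any kin is known to be edible' policy in step 3 are assumptions, not project code):

def should_eat(associations, thing, kind):
    # Walk steps 2-4 above over a list of [thing, key, value] triples.
    # Step 2: direct knowledge about this exact thing.
    direct = [v for t, k, v in associations if t == thing and k == 'edible']
    if direct:
        return direct[-1] == 'yes'

    # Step 3: any known-edible thing of the same kind (e.g. any plantNN)?
    if any(t.startswith(kind) and k == 'edible' and v == 'yes'
           for t, k, v in associations):
        return True  # worth trying; either outcome yields a new association

    # Step 4: no known-edible kin at all -- fall back on taste cues shared with
    # anything else already believed to be edible.
    edible_things = {t for t, k, v in associations if k == 'edible' and v == 'yes'}
    my_tastes = {v for t, k, v in associations if t == thing and k == 'taste'}
    edible_tastes = {v for t, k, v in associations
                     if t in edible_things and k == 'taste'}
    return bool(my_tastes & edible_tastes)

If the agent has feel and taste associations but no 'edible' associations at all, should_eat(associations, 'plant01', 'plant') falls through to step 4 and returns False under this sketch's conservative fallback.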

jesselawson added the enhancement label on Apr 12, 2019

@jesselawson (Owner, Author)

Could we use a relational db for the brain?

association table:
  source  string
  thing   string
  key     string
  value   string

Each agent gets its own association table. For example:

self,plant01,taste,sour
self,plant01,taste,bitter
self,plant01,feel,pokey
self,plant01,smell,bad
agent02,plant01,edible,yes
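
As a minimal, illustrative sqlite3 sketch of that table (the table name my_associations_table matches the queries further down; the column comments are assumptions):

import sqlite3

conn = sqlite3.connect(':memory:')  # stand-in for the brain's database
conn.execute("""
    CREATE TABLE my_associations_table (
        source TEXT,  -- who the association came from: 'self', 'agent02', 'ha', ...
        thing  TEXT,  -- e.g. 'plant01'
        key    TEXT,  -- e.g. 'taste', 'feel', 'edible'
        value  TEXT   -- e.g. 'sour', 'pokey', 'yes'
    )
""")
conn.executemany(
    "INSERT INTO my_associations_table VALUES (?, ?, ?, ?)",
    [('self', 'plant01', 'taste', 'sour'),
     ('self', 'plant01', 'taste', 'bitter'),
     ('self', 'plant01', 'feel', 'pokey'),
     ('self', 'plant01', 'smell', 'bad'),
     ('agent02', 'plant01', 'edible', 'yes')])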

Using the above, we still don't know from our own experience whether plant01 is edible; we need to create a Hypothetical Association (HA):

ha,plant01,edible,yes
or
ha,plant01,edible,no

We would create one EA model based on 'self' sources to make a weighted prediction, then an LA model using the agent02 (and any other) sources to make a weighted LA prediction.

We would compare the EA and LA predictions (weighted, of course) and then make a determination.
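
A toy sketch of that comparison, assuming the simplest possible weighting (fraction of 'yes' votes per source group) and arbitrary blend weights; none of this is prescribed above:

def predict_edible(rows, thing, ea_weight=0.6, la_weight=0.4):
    # rows: (source, thing, key, value) tuples from the association table.
    # Blend a 'self'-sourced (EA) vote with an other-agent (LA) vote.
    def vote(source_ok):
        votes = [v == 'yes' for s, t, k, v in rows
                 if t == thing and k == 'edible' and source_ok(s)]
        return sum(votes) / len(votes) if votes else None

    ea = vote(lambda s: s == 'self')
    la = vote(lambda s: s not in ('self', 'ha'))
    if ea is None and la is None:
        return None  # no evidence either way; keep the Hypothetical Association open
    score = ((ea_weight * ea if ea is not None else 0.0)
             + (la_weight * la if la is not None else 0.0))
    total = ((ea_weight if ea is not None else 0.0)
             + (la_weight if la is not None else 0.0))
    return score / total >= 0.5

Against the example rows above there is no 'self' evidence about edibility, so the LA prediction (agent02's report) carries the determination and plant01 comes out as probably edible.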

Thought Process for New Encounter

I am on a cell with a Thing. Is this thing edible?
What kind of thing is it? plant01.

Do I know anything about this plant01? Select * from my_associations_table where thing = 'plant01'

If there are no results:
  Do I know anything about plants in general? Select * from my_associations_table where thing like 'plant%'
  If there are no results: ...
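
Continuing the hypothetical sqlite3 sketch from earlier, that lookup chain could be written as (illustrative only; the trailing '...' in the thought process above is left open here as well):

def lookup(conn, thing):
    # Do I know anything about this exact thing?
    rows = conn.execute(
        "SELECT * FROM my_associations_table WHERE thing = ?", (thing,)).fetchall()
    if rows:
        return rows
    # If not: do I know anything about this kind of thing in general?
    kind = thing.rstrip('0123456789')  # 'plant01' -> 'plant'
    return conn.execute(
        "SELECT * FROM my_associations_table WHERE thing LIKE ?",
        (kind + '%',)).fetchall()

lookup(conn, 'plant01')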
