ADAM: Abduction to Demonstrate an Articulate Machine

ADAM is a demonstration of “grounded language acquisition,” which is to say learning (some amount of) language from observing how language is used in concrete situations, like infants (presumably) do.

In the next section, we present an overview of the system architecture. That is followed by low-level API documentation. For installation instructions and information about contributing, please see the Markdown files in the project’s Git repository.

ADAM is ISI’s effort under DARPA’s Grounded Artificial Intelligence Language Acquisition (GAILA) program. Background on the GAILA program is given in DARPA’s call for proposals; there is also a video of a talk giving an overview of our plans for ADAM (aimed at an audience familiar with the GAILA program).

Introduction

The goal of ADAM is to build a language learning module which can

  • learn (parts of) human language by observing a sequence of situations paired with situationally-appropriate language (usually a description of the situation).

  • be presented with novel situations and give reasonable linguistic descriptions of them.

For example, if the learner has been shown instances of various objects on a table and has also been shown instances of a toy truck, then when given the situation of a toy truck on a table, it should be able to describe it as “a truck on a table” even if it has never seen that particular situation before.

A few aspects of ADAM’s approach worth noting are:

  • there are several ways one could represent a “situation.” Rather than using, for example, videos of real-life situations, we represent situations by a data structure in the computer’s memory.

  • it is implausible for the learner to observe the data structure representing the situation directly. Instead, the learner will observe a perceptual representation derived from the situation and based on those perceptual primitives that there is evidence an infant has access to.

  • in the GAILA program, researchers are permitted to control the type and sequence of example situations which are given to the learner (the curriculum). Rather than specifying a curriculum manually, we provide ways to procedurally generate curricula.

System Architecture

A particular “run” of the ADAM system is described by an Experiment. Every Experiment needs to know what situations to use for training and testing, how those situations are presented to the learner, and what analyses to perform on the results.

There are a variety of ways to specify the situations for training and testing, but this is prototypically done by generating them procedurally using a SituationTemplateProcessor. The way a Situation is presented to the TopLevelLanguageLearner is controlled by a LanguageGenerator and a PerceptualRepresentationGenerator.

The analyses to perform on results are given by DescriptionObservers.

The ADAM system’s main entry point is adam.experiment.log_experiment. For details on this see adam/experiment/README.md. In addition, there may eventually be an interactive demonstration.

Fundamental Interfaces

adam.situation

Structures for describing situations in the world at an abstracted, human-friendly level.

class adam.situation.Situation

A situation is a high-level representation of a configuration of objects, possibly including changes in the states of objects across time.

An example situation might represent a person holding a toy truck and then putting it on a table.

A curriculum is a sequence of Situations.

Situations are a high-level description intended to make it easy for human beings to specify curricula.

Situations will be transformed into pairs of PerceptualRepresentations and LinguisticDescriptions for input to a TopLevelLanguageLearner by PerceptualRepresentationGenerators and LanguageGenerators, respectively.

class adam.situation.BagOfFeaturesSituationRepresentation(features=i{})

Represents a Situation as an unstructured set of features.

For testing purposes only.

features: immutablecollections._immutableset.ImmutableSet[str]

The set of string features which describes this situation.
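
For example, a trivial situation can be represented as an unordered bag of feature strings (a minimal sketch; the feature strings here are invented for illustration):

    from immutablecollections import immutableset

    from adam.situation import BagOfFeaturesSituationRepresentation

    # A toy situation described only by unstructured string features.
    situation = BagOfFeaturesSituationRepresentation(
        features=immutableset(["red", "ball", "on-table"])
    )
    assert "ball" in situation.features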

class adam.situation.SituationObject(ontology_node, *, axes, schema_axis_to_object_axis, properties=i{}, debug_handle=NOTHING)

An object present in some situation.

Every object must refer to an OntologyNode linking it to a type in an ontology.

Unlike most of our classes, SituationObject has id-based hashing and equality. This is because two objects with identical properties are nonetheless distinct.

SituationObject should not be instantiated directly. Instead, use instantiate_ontology_node.

ontology_node: adam.ontology.OntologyNode

The OntologyNode specifying the type of thing this object is.

axes: adam.axes.Axes
schema_axis_to_object_axis: Mapping[adam.axis.GeonAxis, adam.axis.GeonAxis]

Provides a mapping between the axes of an ObjectStructuralSchema (which are abstract and generic - i.e. the axes of “tire”s in general) and the concrete instantiations of those axes which are stored in the axes field of this object. We need to track this information to keep the object axes in sync with the Geon axes during perceptual generation.

Note that this mapping may be empty if this situation object was not derived from an ObjectStructuralSchema.

Rather than setting this field by hand, we recommend using the static factory method from_structural_schema.

properties: immutablecollections._immutableset.ImmutableSet[adam.ontology.OntologyNode]

The OntologyNodes representing the properties this object has.

debug_handle: str
static instantiate_ontology_node(ontology_node, *, properties=i{}, debug_handle=None, ontology)

Make a SituationObject from the object type ontology_node with properties properties. The properties and ontology node must all come from ontology.

Return type

SituationObject
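
For example (a sketch which assumes the phase-1 ontology and its BALL and RED nodes from adam.ontology.phase1_ontology):

    from adam.ontology.phase1_ontology import BALL, GAILA_PHASE_1_ONTOLOGY, RED
    from adam.situation import SituationObject

    # A red ball; the object type and the asserted property both come from
    # the phase-1 ontology.
    ball = SituationObject.instantiate_ontology_node(
        ontology_node=BALL,
        properties=[RED],
        ontology=GAILA_PHASE_1_ONTOLOGY,
    )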

class adam.situation.LocatedObjectSituation(objects_to_locations=i{})

A representation of a Situation as a set of objects located at particular Points.

objects_to_locations: Mapping[adam.situation.SituationObject, adam.math_3d.Point]

A mapping of SituationObjects to Points giving their locations.
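
A minimal sketch, assuming the phase-1 ontology as above and that adam.math_3d.Point takes x, y, and z coordinates:

    from adam.math_3d import Point
    from adam.ontology.phase1_ontology import BALL, GAILA_PHASE_1_ONTOLOGY
    from adam.situation import LocatedObjectSituation, SituationObject

    ball = SituationObject.instantiate_ontology_node(
        ontology_node=BALL, ontology=GAILA_PHASE_1_ONTOLOGY
    )

    # A ball located one unit above the origin.
    situation = LocatedObjectSituation(
        objects_to_locations={ball: Point(0.0, 0.0, 1.0)}
    )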

class adam.situation.Action(action_type, argument_roles_to_fillers=i{}, *, during=None, auxiliary_variable_bindings=i{})

An action.

This can be bound to SituationObject to represent actions in Situations or to TemplateObjectVariables to represent actions in situation templates.

action_type: _ActionTypeT
argument_roles_to_fillers: immutablecollections._immutablemultidict.ImmutableSetMultiDict[adam.ontology.OntologyNode, Union[_ObjectT, adam.ontology.phase1_spatial_relations.Region[_ObjectT]]]

A mapping of semantic roles (given as OntologyNodes) to their fillers.

There may be multiple fillers for the same semantic role (e.g. conjoined arguments).

during: Optional[adam.ontology.during.DuringAction[_ObjectT]]
auxiliary_variable_bindings: immutablecollections._immutabledict.ImmutableDict[adam.ontology.action_description.ActionDescriptionVariable, _ObjectT]

A mapping of action variables from action_type’s ActionDescription to the items which should fill them.

accumulate_referenced_objects(object_accumulator)
Return type

None
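
For example, “Mom eats a cookie” could be expressed as follows (a sketch assuming the phase-1 ontology provides the EAT action type, the AGENT and PATIENT role nodes, and the MOM and COOKIE object types):

    from adam.ontology.phase1_ontology import (
        AGENT,
        COOKIE,
        EAT,
        GAILA_PHASE_1_ONTOLOGY,
        MOM,
        PATIENT,
    )
    from adam.situation import Action, SituationObject

    mom = SituationObject.instantiate_ontology_node(
        ontology_node=MOM, ontology=GAILA_PHASE_1_ONTOLOGY
    )
    cookie = SituationObject.instantiate_ontology_node(
        ontology_node=COOKIE, ontology=GAILA_PHASE_1_ONTOLOGY
    )

    # The action type plus a multi-mapping from semantic roles to the
    # situation objects which fill them.
    eating = Action(
        action_type=EAT,
        argument_roles_to_fillers=[(AGENT, mom), (PATIENT, cookie)],
    )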

adam.language

Representations of the linguistic input and outputs of a TopLevelLanguageLearner.

class adam.language.LinguisticDescription(*args, **kwds)

A linguistic description of a Situation.

This, together with a PerceptualRepresentation, forms an observation that a TopLevelLanguageLearner learns from.

A trained TopLevelLanguageLearner can then generate new LinguisticDescriptions given only a PerceptualRepresentation of a Situation.

Any LinguisticDescription must minimally be able to be treated as a sequence of token strings.

span(start_index, *, end_index_exclusive)
Return type

Span

as_token_sequence()

Get this description as a tuple of token strings.

Return type

Tuple[str, …]

Returns

A tuple of token strings describing this LinguisticDescription

as_token_string(*, span=None)
Return type

str

class adam.language.TokenSequenceLinguisticDescription(tokens)

A LinguisticDescription which consists of a sequence of tokens.

tokens: Tuple[str, ...]
as_token_sequence()

Get this description as a tuple of token strings.

Return type

Tuple[str, …]

Returns

A tuple of token strings describing this LinguisticDescription
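
For example (a minimal sketch):

    from adam.language import TokenSequenceLinguisticDescription

    description = TokenSequenceLinguisticDescription(
        ("a", "truck", "on", "a", "table")
    )
    assert description.as_token_sequence() == ("a", "truck", "on", "a", "table")
    print(description.as_token_string())  # e.g. "a truck on a table"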

adam.perception

This module provides classes related to the perceptual primitive representation used to describe Situations from the point-of-view of TopLevelLanguageLearners.

class adam.perception.MatchMode(value)

An enumeration.

OBJECT = 'object'
NON_OBJECT = 'non_object'
class adam.perception.ObjectPerception(debug_handle, geon=None, axes=NOTHING)

The learner’s perception of a particular object.

This object pretty much just represents the object’s existence; its attributes are handled via PropertyPerceptions.

debug_handle: str

A human-readable string associated with this object.

It is for debugging use only and should not be accessed by any algorithms.

geon: Optional[adam.geon.Geon]
axes: adam.axes.Axes
class adam.perception.PerceptualRepresentationFrame

Represents a TopLevelLanguageLearner’s perception of some Situation at a single moment.

One or more of these forms a PerceptualRepresentation.

class adam.perception.PerceptualRepresentation(frames, *, during=None)

A TopLevelLanguageLearner’s perception of a Situation as a sequence of perceptual representations of individual moments (frames).

frames: Tuple[PerceptionT, ...]

The frames making up the description of a situation.

Usually for a static situation, this will be a single frame, but there could be two or three for complex actions.

during: Optional[adam.ontology.during.DuringAction[adam.perception.ObjectPerception]]
static single_frame(perception_frame)

Convenience method for generating a PerceptualRepresentation which is a single frame.

Parameters

perception_frame (~_PerceptionT2) – a PerceptualRepresentationFrame

Return type

PerceptualRepresentation[~_PerceptionT2]

Returns

A PerceptualRepresentation wrapping the provided frame.

is_dynamic()

Does this situation represent a changing state-of-affairs?

Return type

bool
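
For example, a static situation can be wrapped in a single frame (a sketch using the testing-only bag-of-features frame documented below):

    from immutablecollections import immutableset

    from adam.perception import (
        BagOfFeaturesPerceptualRepresentationFrame,
        PerceptualRepresentation,
    )

    frame = BagOfFeaturesPerceptualRepresentationFrame(
        features=immutableset(["red", "round", "on-table"])
    )
    perception = PerceptualRepresentation.single_frame(frame)

    # A single frame with no DuringAction describes a static situation.
    assert not perception.is_dynamic()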

class adam.perception.PerceptualRepresentationGenerator(*args, **kwds)

A strategy for generating PerceptualRepresentations of Situations.

This is used when constructing curricula procedurally so that humans do not need to build perceptual representations by hand.

abstract generate_perception(situation, chooser)

Generate a PerceptualRepresentation of a Situation.

Parameters
  • situation (~_SituationT) – The Situation to represent.

  • chooser (SequenceChooser) – An optional SequenceChooser to be used for any required random choices. If none is provided, an unspecified but deterministic source of “random” choice is used.

Return type

PerceptualRepresentation[~PerceptionT]

Returns

A PerceptualRepresentation of the Situation.

class adam.perception.BagOfFeaturesPerceptualRepresentationFrame(features)

Represents a TopLevelLanguageLearner’s perception of a Situation at a single moment as an unstructured set of features.

For testing purposes only.

features: immutablecollections._immutableset.ImmutableSet[str]

A set of string features describing the Situation.

class adam.perception.DummyVisualPerceptionFrame(object_perceptions=i{})

A visual representation made up of several objects represented by DummyVisualPerceptionFrame.SingleObjectPerceptions.

This is only for testing purposes.

object_perceptions: immutablecollections._immutableset.ImmutableSet[adam.perception.DummyVisualPerceptionFrame.SingleObjectPerception]

A set of perceptions of objects as DummyVisualPerceptionFrame.SingleObjectPerceptions.

class SingleObjectPerception(tag, location)

A visual representation of a Situation at a single moment as a string describing an object together with a location, with no other structure or properties.

For example, for a truck it simply says “this looks like a truck” and gives its Point location.

This is only for testing purposes.

tag: str

A simple string description of an object.

location: adam.math_3d.Point

The Point where the object is located.

class adam.perception.DummyVisualPerceptionGenerator(*args, **kwds)

Computes a simple PerceptualRepresentation for a LocatedObjectSituation, using the name of each object’s type plus its location to generate a DummyVisualPerceptionFrame.

generate_perception(situation, chooser)

Generate a PerceptualRepresentation of a Situation.

Parameters
Return type

PerceptualRepresentation[DummyVisualPerceptionFrame]

Returns

A PerceptualRepresentation of the Situation.

adam.learner

Interfaces for language learning code.

class adam.learner.LearningExample(perception, linguistic_description)

A PerceptualRepresentation of a situation and its LinguisticDescription that a TopLevelLanguageLearner can learn from.

perception: adam.perception.PerceptualRepresentation[PerceptionT]

The TopLevelLanguageLearner’s perception of the Situation

linguistic_description: LinguisticDescriptionT

A human-language description of the Situation
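
For example, a toy learning example can be built from the testing-only classes documented elsewhere in this section (a sketch):

    from immutablecollections import immutableset

    from adam.language import TokenSequenceLinguisticDescription
    from adam.learner import LearningExample
    from adam.perception import (
        BagOfFeaturesPerceptualRepresentationFrame,
        PerceptualRepresentation,
    )

    example = LearningExample(
        perception=PerceptualRepresentation.single_frame(
            BagOfFeaturesPerceptualRepresentationFrame(immutableset(["red", "ball"]))
        ),
        linguistic_description=TokenSequenceLinguisticDescription(("a", "red", "ball")),
    )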

class adam.learner.TopLevelLanguageLearnerDescribeReturn(semantics_to_descriptions, *, description_to_confidence, semantics_to_feature_strs)

The descriptions are returned as a mapping from linguistic descriptions to their scores. The scores are not defined other than “higher is better.”

It is possible that the learner cannot produce a description, in which case an empty mapping is returned.

Features may not exist for all semantic nodes; they are not guaranteed.

semantics_to_descriptions: Mapping[adam.semantics.SemanticNode, adam.language.LinguisticDescription]
description_to_confidence: Mapping[adam.language.LinguisticDescription, float]
semantics_to_feature_strs: Mapping[adam.semantics.SemanticNode, Sequence[str]]
class adam.learner.TopLevelLanguageLearner(*args, **kwds)

Models an infant learning language.

A TopLevelLanguageLearner learns language by observing a sequence of LearningExamples.

A TopLevelLanguageLearner can describe new situations given a PerceptualRepresentation.

abstract observe(learning_example, offset=0, *, debug_perception_graph_logger=None)

Observe a LearningExample, possibly updating internal state.

Return type

None

abstract describe(perception, *, debug_perception_graph_logger=None)

Given a PerceptualRepresentation of a situation, produce a TopLevelLanguageLearnerDescribeReturn.

Return type

TopLevelLanguageLearnerDescribeReturn

abstract log_hypotheses(log_output_path)

Log some representation of the learner’s current hypothesized semantics for words/phrases to log_output_path

Return type

None

class adam.learner.MemorizingLanguageLearner

A trivial implementation of TopLevelLanguageLearner which just memorizes situations it has seen before and cannot produce descriptions of any other situations.

If this learner observes the same PerceptualRepresentation multiple times, only the final description is memorized.

This implementation is only useful for testing.

observe(learning_example, offset=0, *, debug_perception_graph_logger=None)

Observe a LearningExample, possibly updating internal state.

Return type

None

describe(perception, *, debug_perception_graph_logger=None)

Given a PerceptualRepresentation of a situation, produce a TopLevelLanguageLearnerDescribeReturn.

Return type

TopLevelLanguageLearnerDescribeReturn

log_hypotheses(log_output_path)

Log some representation of the learner’s current hypothesized semantics for words/phrases to log_output_path

Return type

None
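
The observe/describe cycle then looks like this (a sketch using the testing-only classes above):

    from immutablecollections import immutableset

    from adam.language import TokenSequenceLinguisticDescription
    from adam.learner import LearningExample, MemorizingLanguageLearner
    from adam.perception import (
        BagOfFeaturesPerceptualRepresentationFrame,
        PerceptualRepresentation,
    )

    learner = MemorizingLanguageLearner()
    perception = PerceptualRepresentation.single_frame(
        BagOfFeaturesPerceptualRepresentationFrame(immutableset(["red", "ball"]))
    )
    learner.observe(
        LearningExample(perception, TokenSequenceLinguisticDescription(("a", "red", "ball")))
    )

    # Having memorized this exact perception, the learner can describe it again.
    result = learner.describe(perception)
    for description, score in result.description_to_confidence.items():
        print(description.as_token_string(), score)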

adam.learner.get_largest_matching_pattern(pattern, graph, *, debug_callback=None, graph_logger=None, ontology, match_ratio=None, match_mode, trim_after_match=None, allowed_matches=i{})

Helper function for learners which returns the largest matching PerceptionGraphPattern, given a perception pattern and graph pair.

Return type

Optional[PerceptionGraphPattern]

adam.learner.graph_without_learner(perception_graph)

Helper function to return a PerceptionGraph without a ground object and its related nodes.

Return type

PerceptionGraph

class adam.learner.ComposableLearner
abstract learn_from(language_perception_semantic_alignment, offset=0)

Learn from a LanguagePerceptionSemanticAlignment describing a situation. This may update some internal state.

Return type

None

abstract enrich_during_learning(language_perception_semantic_alignment)

Given a LanguagePerceptionSemanticAlignment wrapping a learning example, return an updated alignment enriched with some extra semantic alignment information.

The learner may have no information to add, in which case it can simply return the alignment it was passed.

Return type

LanguagePerceptionSemanticAlignment

abstract enrich_during_description(perception_semantic_alignment)

Given a PerceptionSemanticAlignment wrapping a perception to be described, return an updated alignment enriched with some extra semantic alignment information.

The learner may have no information to add, in which case it can simply return the alignment it was passed.

Return type

PerceptionSemanticAlignment

abstract log_hypotheses(log_output_path)

Log some representation of the learner’s current hypothesized semantics for words/phrases to log_output_path.

Return type

None

abstract concepts_to_patterns()

Return a dictionary of the learner’s current hypothesized semantics for words/phrases.

Return type

Dict[Concept, PerceptionGraphPattern]

adam.semantics

Classes to represent semantics from the learner’s point-of-view.

Really this and HighLevelSemanticsSituation should somehow be refactored together, but it’s not worth the trouble at this point.

class adam.semantics.Concept(*args, **kwds)
debug_string: str
class adam.semantics.SyntaxSemanticsVariable(name)

A variable portion of a SurfaceTemplate or of a learner semantic structure.

name: str
class adam.semantics.ObjectConcept(debug_string)
debug_string: str
class adam.semantics.AttributeConcept(debug_string)
debug_string: str
class adam.semantics.KindConcept(debug_string)
debug_string: str
class adam.semantics.RelationConcept(debug_string)
debug_string: str
class adam.semantics.ActionConcept(debug_string)
debug_string: str
class adam.semantics.FunctionalObjectConcept(debug_string)
debug_string: str
class adam.semantics.GenericConcept(debug_string)
debug_string: str
class adam.semantics.SemanticNode(*args, **kwds)
concept: adam.semantics.Concept
slot_fillings: immutablecollections._immutabledict.ImmutableDict[adam.semantics.SyntaxSemanticsVariable, adam.semantics.ObjectSemanticNode]
confidence: float
static for_concepts_and_arguments(concept, slots_to_fillers, confidence)
class adam.semantics.ObjectSemanticNode(concept, confidence)
concept: adam.semantics.ObjectConcept
slot_fillings: immutablecollections._immutabledict.ImmutableDict[adam.semantics.SyntaxSemanticsVariable, adam.semantics.ObjectSemanticNode]
confidence: float
class adam.semantics.AttributeSemanticNode(concept, slot_fillings, confidence)
concept: adam.semantics.AttributeConcept
slot_fillings: immutablecollections._immutabledict.ImmutableDict[adam.semantics.SyntaxSemanticsVariable, adam.semantics.ObjectSemanticNode]
confidence: float
class adam.semantics.RelationSemanticNode(concept, slot_fillings, confidence)
concept: adam.semantics.RelationConcept
slot_fillings: immutablecollections._immutabledict.ImmutableDict[adam.semantics.SyntaxSemanticsVariable, adam.semantics.ObjectSemanticNode]
confidence: float
class adam.semantics.ActionSemanticNode(concept, slot_fillings, confidence)
concept: adam.semantics.ActionConcept
slot_fillings: immutablecollections._immutabledict.ImmutableDict[adam.semantics.SyntaxSemanticsVariable, adam.semantics.ObjectSemanticNode]
confidence: float
class adam.semantics.LearnerSemantics(objects, attributes, relations, actions, functional_concept_to_object_concept=i{})

Represents the learner’s semantic (rather than perceptual) understanding of a situation. The learner is assumed to view the situation as a collection of objects which possess attributes, have relations to one another, and serve as the arguments of actions.

objects: immutablecollections._immutableset.ImmutableSet[adam.semantics.ObjectSemanticNode]
attributes: immutablecollections._immutableset.ImmutableSet[adam.semantics.AttributeSemanticNode]
relations: immutablecollections._immutableset.ImmutableSet[adam.semantics.RelationSemanticNode]
actions: immutablecollections._immutableset.ImmutableSet[adam.semantics.ActionSemanticNode]
functional_concept_to_object_concept: immutablecollections._immutabledict.ImmutableDict[adam.semantics.FunctionalObjectConcept, adam.semantics.ObjectConcept]
objects_to_attributes: immutablecollections._immutablemultidict.ImmutableSetMultiDict[adam.semantics.ObjectSemanticNode, adam.semantics.AttributeSemanticNode]
objects_to_relation_in_slot1: immutablecollections._immutablemultidict.ImmutableSetMultiDict[adam.semantics.ObjectSemanticNode, adam.semantics.RelationSemanticNode]
objects_to_actions: immutablecollections._immutablemultidict.ImmutableSetMultiDict[adam.semantics.ObjectSemanticNode, adam.semantics.ActionSemanticNode]
static from_nodes(semantic_nodes, *, concept_map=i{})
Return type

LearnerSemantics

adam.situation.templates

Tools for working with situation templates, which allow a human to compactly describe a large number of possible situations.

Note that we provide convenience functions like random_situation_templates and one_situation_per_template to assist in supplying Situations to Experiments.

class adam.situation.templates.SituationTemplate

A compact description for a large number of Situations.

class adam.situation.templates.SituationTemplateProcessor(*args, **kwds)

Turns a SituationTemplate into one or more Situations.

abstract generate_situations(template, *, num_instantiations=1, chooser=Factory(factory=<function RandomChooser.for_seed>, takes_self=False), include_ground=True, default_addressee_node=None)

Generates one or more Situations from a SituationTemplate.

The behavior of this method should be deterministic conditional upon an identically initialized and deterministic SequenceChooser being supplied.

Parameters
  • template (~_SituationTemplateT) – the template to instantiate.

  • default_addressee_node (Optional[OntologyNode]) – The ontology node to use as the default addressee in the scene.

  • num_instantiations (int) – the number of instantiations requested.

  • chooser (SequenceChooser) – the means of making any random selections the generator may need.

  • include_ground (bool) – If true, include the ground in the scene.

Return type

Iterable[~SituationT]

Returns

A set of instantiated Situations with size at most num_instantiations.

class adam.situation.templates.SituationTemplateObject(handle)

An object in a SituationTemplate.

For example, a particular ball or person.

handle: str

Every object has a string handle which is used to name it for debugging purposes only

class adam.situation.templates.SimpleSituationTemplate(objects, objects_to_required_properties, objects_to_ontology_types)

A minimal implementation of a situation template for objects only (no actions or relations).

It is usually easiest to create a SimpleSituationTemplate using SimpleSituationTemplate.Builder.

objects: immutablecollections._immutableset.ImmutableSet[adam.situation.templates.SituationTemplateObject]

The SituationObjects present in the Situation, e.g. {a person, a ball, a table}

objects_to_required_properties: immutablecollections._immutablemultidict.ImmutableSetMultiDict[adam.situation.templates.SituationTemplateObject, adam.ontology.OntologyNode]

A mapping of each SituationObject in the Situation to OntologyNodes it is required to have.

objects_to_ontology_types: immutablecollections._immutabledict.ImmutableDict[adam.situation.templates.SituationTemplateObject, adam.ontology.OntologyNode]

A mapping of each SituationObject in the Situation to the OntologyNodes its type must correspond to either directly or by inheritance.

class Builder

The preferred means of creating a SimpleSituationTemplate

object_variable(handle, ontology_type, properties=())

Add an object to a SimpleSituationTemplate being built.

Parameters
Return type

SituationTemplateObject

Returns

the SituationTemplateObject which was created.

build()
Return type

SimpleSituationTemplate

class adam.situation.templates.SimpleSituationTemplateProcessor(ontology)

A trivial situation template processor for testing use.

This cannot handle anything in situation templates except object variables. These object variables are instantiated with random compatible objects from the provided Ontology; they are positioned in a line one meter apart.

generate_situations(template, *, num_instantiations=1, chooser=Factory(factory=<function RandomChooser.for_seed>, takes_self=False), include_ground=True, default_addressee_node=None)

Generates one or more Situations from a SituationTemplate.

The behavior of this method should be deterministic conditional upon an identically initialized and deterministic SequenceChooser being supplied.

Parameters
  • template (SimpleSituationTemplate) – the template to instantiate.

  • default_addressee_node (Optional[OntologyNode]) – The ontology node to use as the default addressee in the scene.

  • num_instantiations (int) – the number of instantiations requested.

  • chooser (SequenceChooser) – the means of making any random selections the generator may need.

  • include_ground (bool) – If true, include the ground in the scene.

Return type

ImmutableSet[LocatedObjectSituation]

Returns

A set of instantiated Situations with size at most num_instantiations.
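
A sketch of the full simple-template workflow, assuming the phase-1 ontology’s BALL and TABLE nodes and that RandomChooser.for_seed accepts a seed:

    from adam.ontology.phase1_ontology import BALL, GAILA_PHASE_1_ONTOLOGY, TABLE
    from adam.random_utils import RandomChooser
    from adam.situation.templates import (
        SimpleSituationTemplate,
        SimpleSituationTemplateProcessor,
    )

    # Build a template with one ball and one table (testing use only).
    builder = SimpleSituationTemplate.Builder()
    builder.object_variable("ball", BALL)
    builder.object_variable("table", TABLE)
    template = builder.build()

    # Instantiate it twice; the objects are placed in a line one meter apart.
    processor = SimpleSituationTemplateProcessor(GAILA_PHASE_1_ONTOLOGY)
    situations = processor.generate_situations(
        template, num_instantiations=2, chooser=RandomChooser.for_seed(0)
    )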

adam.situation.templates.random_situation_templates(situation_templates, num_random_templates, sequence_chooser)

Get num_random_templates random (with replacement) SituationTemplates from situation_templates, where “random” choices are given by the provided SequenceChooser.

Return type

Iterable[SituationTemplate]

adam.situation.templates.one_situation_per_template(situation_templates, situation_template_processor, sequence_chooser)
Return type

Iterable[~SituationT]

adam.situation.templates.phase1_templates

Our strategy for SituationTemplates in Phase 1 of ADAM.

class adam.situation.templates.phase1_templates.TemplateObjectVariable(handle, node_selector, asserted_properties=i{})

A variable in a Phase1SituationTemplate which could be filled by any object whose OntologyNode is selected by node_selector.

asserted_properties allows you to specify which properties should be asserted for this object in generated Situations. This is for specifying properties which are not intrinsic to an object (e.g. if your object variable is constrained to be a sub-type of person, you don’t need to and shouldn’t specify ANIMATE as an asserted property) and which are not used to filter what objects can fill this variable. For example, if you want whatever fills this variable to be red in this particular situation, you would specify RED in asserted_properties.

We provide object_variable to make creating TemplateObjectVariables more convenient.

TemplateObjectVariables with the same node selector are not equal to one another so that you can have multiple objects in a Situation which obey the same constraints.

node_selector: adam.ontology.selectors.OntologyNodeSelector
asserted_properties: immutablecollections._immutableset.ImmutableSet[Union[adam.ontology.OntologyNode, adam.situation.templates.phase1_templates.TemplatePropertyVariable]]
class adam.situation.templates.phase1_templates.TemplatePropertyVariable(handle, node_selector)

A variable in a Phase1SituationTemplate which could be filled by any property whose OntologyNode is selected by node_selector.

We provide property_variable to make creating TemplatePropertyVariables more convenient.

TemplatePropertyVariables with the same node selector are not equal to one another so that you can have multiple objects in a Situation which obey the same constraints.

node_selector: adam.ontology.selectors.OntologyNodeSelector
class adam.situation.templates.phase1_templates.TemplateActionTypeVariable(handle, node_selector)

A variable in a Phase1SituationTemplate which could be filled by any action type whose OntologyNode is selected by node_selector.

We provide action_variable to make creating TemplateActionTypeVariables more convenient.

node_selector: adam.ontology.selectors.OntologyNodeSelector
class adam.situation.templates.phase1_templates.Phase1SituationTemplate(name, salient_object_variables, *, background_object_variables=i{}, asserted_always_relations=i{}, constraining_relations=i{}, actions=i{}, syntax_hints=i{}, gazed_objects=NOTHING, before_action_relations=i{}, after_action_relations=i{})

The SituationTemplate implementation used in Phase 1 of the ADAM project.

Currently, this can only be a collection of TemplateObjectVariables.

Phase1SituationTemplateGenerator will translate these to a sequence of HighLevelSemanticsSituations corresponding to the Cartesian product of the possible values of the object variables.

Beware that this can be very large if the number of object variables or the number of possible values of the variables is even moderately large.

name: str
salient_object_variables: immutablecollections._immutableset.ImmutableSet[adam.situation.templates.phase1_templates.TemplateObjectVariable]
background_object_variables: immutablecollections._immutableset.ImmutableSet[adam.situation.templates.phase1_templates.TemplateObjectVariable]
asserted_always_relations: immutablecollections._immutableset.ImmutableSet[adam.relation.Relation[adam.situation.templates.phase1_templates.TemplateObjectVariable]]

These are relations we assert to hold true in the situation. This should be used to specify additional relations which cannot be deduced from the types of the objects alone.

constraining_relations: immutablecollections._immutableset.ImmutableSet[adam.relation.Relation[adam.situation.templates.phase1_templates.TemplateObjectVariable]]

These are relations which we require to be true; they are used in selecting assignments to object variables. Our ability to enforce these constraints efficiently is very limited, so don’t make them too complex or constraining!

actions: immutablecollections._immutableset.ImmutableSet[adam.situation.Action[Union[adam.ontology.OntologyNode, adam.situation.templates.phase1_templates.TemplateActionTypeVariable], adam.situation.templates.phase1_templates.TemplateObjectVariable]]
syntax_hints: immutablecollections._immutableset.ImmutableSet[str]

A temporary hack to allow control of language generation decisions using the situation template language.

See https://github.com/isi-vista/adam/issues/222 .

all_object_variables: immutablecollections._immutableset.ImmutableSet[adam.situation.templates.phase1_templates.TemplateObjectVariable]

All `TemplateObjectVariable`s in the situation, both salient and auxiliary to actions.

gazed_objects: immutablecollections._immutableset.ImmutableSet[adam.situation.templates.phase1_templates.TemplateObjectVariable]

A set of `TemplateObjectVariable`s which are the focus of the speaker. Defaults to all semantic role fillers of situation actions.

before_action_relations: immutablecollections._immutableset.ImmutableSet[adam.relation.Relation[adam.situation.templates.phase1_templates.TemplateObjectVariable]]

The relations which hold in this SituationTemplate, before, but not necessarily after, any actions which occur.

It is not necessary to state every relationship which holds in a situation. Rather this should contain the salient relationships which should be expressed in the linguistic description.

Do not specify those relations here which are implied by any actions which occur. Those are handled automatically.

after_action_relations: immutablecollections._immutableset.ImmutableSet[adam.relation.Relation[adam.situation.templates.phase1_templates.TemplateObjectVariable]]

The relations which hold in this SituationTemplate, after, but not necessarily before, any actions which occur.

It is not necessary to state every relationship which holds in a situation. Rather this should contain the salient relationships which should be expressed in the linguistic description.

Do not specify those relations here which are implied by any actions which occur. Those are handled automatically.

property has_relations
adam.situation.templates.phase1_templates.all_possible(situation_template, *, ontology, chooser, default_addressee_node=learner[is-learner])

Generator for all possible instantiations of situation_template with ontology.

Return type

Iterable[HighLevelSemanticsSituation]

adam.situation.templates.phase1_templates.sampled(situation_template, *, ontology, chooser, max_to_sample, default_addressee_node=learner[is-learner], block_multiple_of_the_same_type)

Gets max_to_sample instantiations of situation_template with ontology

Return type

Iterable[HighLevelSemanticsSituation]

adam.situation.templates.phase1_templates.fixed_assignment(situation_template, assignment, *, ontology, chooser, default_addressee_node=learner[is-learner])
Return type

Iterable[HighLevelSemanticsSituation]

adam.situation.templates.phase1_templates.object_variable(debug_handle, root_node=thing, *, required_properties=i{}, banned_properties=i{}, added_properties=i{}, banned_ontology_types=i{})

Create a TemplateObjectVariable with the specified debug_handle which can be filled by any object whose OntologyNode is a descendant of (or is exactly) root_node and which possesses all properties in required_properties.

Additionally, the template will add all properties in added_properties to the object. Use required_properties for things like “anything filling this variable should be animate.” Use added_properties for things like “whatever fills this variable, make it red.”

You can optionally specify banned_ontology_types to block this variable from being filled by those ontology types or any of their descendants.

Return type

TemplateObjectVariable
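
A sketch combining object_variable with a Phase1SituationTemplate and sampled (assuming the phase-1 ontology’s INANIMATE_OBJECT and RED nodes and that RandomChooser.for_seed accepts a seed):

    from adam.ontology.phase1_ontology import (
        GAILA_PHASE_1_ONTOLOGY,
        INANIMATE_OBJECT,
        RED,
    )
    from adam.random_utils import RandomChooser
    from adam.situation.templates.phase1_templates import (
        Phase1SituationTemplate,
        object_variable,
        sampled,
    )

    # Any inanimate object, additionally asserted to be red in generated situations.
    red_thing = object_variable(
        "red-thing", root_node=INANIMATE_OBJECT, added_properties=[RED]
    )
    template = Phase1SituationTemplate(
        "red-thing", salient_object_variables=[red_thing]
    )

    situations = sampled(
        template,
        ontology=GAILA_PHASE_1_ONTOLOGY,
        chooser=RandomChooser.for_seed(0),
        max_to_sample=3,
        block_multiple_of_the_same_type=True,
    )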

adam.situation.templates.phase1_templates.property_variable(debug_handle, root_node=property, *, with_meta_properties=i{}, banned_values=i{})

Create a TemplatePropertyVariable with the specified debug_handle which can be filled by any property whose OntologyNode is a descendant of (or is exactly) root_node and which possesses all meta-properties in with_meta_properties.

Return type

TemplatePropertyVariable

adam.situation.templates.phase1_templates.action_variable(debug_handle, root_node=action, *, with_subcategorization_frame=None, with_properties=i{})

Create a TemplateActionTypeVariable with the specified debug_handle which can be filled by any action type whose OntologyNode is a descendant of (or is exactly) root_node and which possesses all properties in with_properties.

Return type

TemplateActionTypeVariable

adam.situation.templates.phase1_templates.color_variable(debug_handle, *, required_properties=i{})

Create a TemplatePropertyVariable with the specified debug_handle which ranges over all colors in the ontology.

Return type

TemplatePropertyVariable

class adam.situation.templates.phase1_templates.TemplateVariableAssignment(object_variables_to_fillers=i{}, property_variables_to_fillers=i{}, action_variables_to_fillers=i{})

An assignment of ontology types to object and property variables in a situation.

object_variables_to_fillers: immutablecollections._immutabledict.ImmutableDict[adam.situation.templates.phase1_templates.TemplateObjectVariable, adam.ontology.OntologyNode]
property_variables_to_fillers: immutablecollections._immutabledict.ImmutableDict[adam.situation.templates.phase1_templates.TemplatePropertyVariable, adam.ontology.OntologyNode]
action_variables_to_fillers: immutablecollections._immutabledict.ImmutableDict[adam.situation.templates.phase1_templates.TemplateActionTypeVariable, adam.ontology.OntologyNode]

adam.experiment

Allows managing experimental configurations in code.

class adam.experiment.ObserversHolder(*, pre_observers=None, post_observers=None, test_observers=None)

This object holds observer states so that the observers remain synchronized when pickling is used.

pre_observers: Optional[Tuple[adam.experiment.observer.DescriptionObserver[[SituationT, LinguisticDescriptionT], PerceptionT]]]
post_observers: Optional[Tuple[adam.experiment.observer.DescriptionObserver[[SituationT, LinguisticDescriptionT], PerceptionT]]]
test_observers: Optional[Tuple[adam.experiment.observer.DescriptionObserver[[SituationT, LinguisticDescriptionT], PerceptionT]]]
class adam.experiment.Experiment(name, *, training_stages, learner_factory, sequence_chooser, pre_example_training_observers=(), post_example_training_observers=(), warm_up_test_instance_groups=(), test_instance_groups=(), test_observers=())

A particular experimental configuration.

An experiment specifies what data is fed to the TopLevelLanguageLearner, what TopLevelLanguageLearner is used and how it is configured, how the trained TopLevelLanguageLearner is tested, and what analysis is done on the results.

At various stages in the experiment, DescriptionObservers will be able to examine the descriptions of situations produced by the TopLevelLanguageLearner. Based on their observations, they can provide reports at the end of the experiment.

Note that Experiment objects should not be reused because components (especially the observers) may maintain state.

name: str

A human-readable description of this experiment.

training_stages: Sequence[adam.curriculum.InstanceGroup[[SituationT, LinguisticDescriptionT], PerceptionT]]

Every experiment has one or more training_stages; simple experiments will have only one, while those which reflect a curriculum which increases in complexity over time may have several. Each of these is an InstanceGroup providing triples of a Situation with the corresponding LinguisticDescription and PerceptualRepresentation. There are many ways an InstanceGroup could be specified, ranging from simply a collection of these triples (e.g. ExplicitWithSituationInstanceGroup) to a complex rule-governed process (e.g. GeneratedFromSituationsInstanceGroup).

learner_factory: Callable[adam.learner.TopLevelLanguageLearner[PerceptionT, LinguisticDescriptionT]]

A no-argument function which will return the TopLevelLanguageLearner which should be trained.

sequence_chooser: adam.random_utils.SequenceChooser

Used for making all “random” decisions by all components to ensure reproducibility.

pre_example_training_observers: Tuple[adam.experiment.observer.DescriptionObserver[[SituationT, LinguisticDescriptionT], PerceptionT]]

These DescriptionObservers are provided with the description a TopLevelLanguageLearner would give to a situation during training before it is shown the correct description.

post_example_training_observers: Tuple[adam.experiment.observer.DescriptionObserver[[SituationT, LinguisticDescriptionT], PerceptionT]]

Same as pre_example_training_observers except they receive the learner’s description after it has updated itself by seeing the correct description.

warm_up_test_instance_groups: Sequence[adam.curriculum.InstanceGroup[[Any, LinguisticDescriptionT], PerceptionT]]

May optionally be provided if at test time the TopLevelLanguageLearner needs to be shown some observations before evaluation (for example, to introduce some new objects).

test_instance_groups: Sequence[adam.curriculum.InstanceGroup[[Any, LinguisticDescriptionT], PerceptionT]]

The situations and perceptual representations the trained TopLevelLanguageLearner will be asked to describe for evaluation. These are specified by InstanceGroups, just like the training data.

test_observers: Tuple[adam.experiment.observer.DescriptionObserver[[SituationT, LinguisticDescriptionT], PerceptionT]]

These DescriptionObservers observe the descriptions given by the learner on the test instances. These are what should be used for computing evaluation metrics.

adam.experiment.execute_experiment(experiment, *, log_path=None, log_hypotheses_every_n_examples=250, learner_logging_path=None, log_learner_state=True, load_learner_state=None, resume_from_latest_logged_state=False, debug_learner_pickling=False, starting_point=0, point_to_log=0, perception_graph_logger=None)

Runs an Experiment.

Return type

None

Supporting Classes: Learner Internals

adam.learner.alignments

class adam.learner.alignments.LanguageConceptAlignment(language, node_to_language_span=i{})

Represents an alignment between a LinguisticDescription and an ImmutableSet[SemanticNode] where the nodes represent concepts.

This can be generified in the future.

node_to_language_span and language_span_to_node are both guaranteed to be sorted by the token spans.

The same span may be aligned to multiple objects (e.g. imagine a scene with multiple “balls”). However, aligned token spans will either be disjoint or identical; partially overlapping aligned token spans are not allowed.

language: adam.language.LinguisticDescription
node_to_language_span: immutablecollections._immutabledict.ImmutableDict[adam.semantics.ObjectSemanticNode, vistautils.span.Span]
language_span_to_node: immutablecollections._immutablemultidict.ImmutableSetMultiDict[vistautils.span.Span, adam.semantics.SemanticNode]
aligned_nodes: immutablecollections._immutableset.ImmutableSet[adam.semantics.SemanticNode]
aligned_token_indices: immutablecollections._immutableset.ImmutableSet[int]
is_entirely_aligned: bool
static create_unaligned(language)
Return type

LanguageConceptAlignment

copy_with_added_token_alignments(new_token_alignments)
Return type

LanguageConceptAlignment

token_index_is_aligned(token_index)
Return type

bool

copy_with_new_nodes(new_semantic_nodes_to_surface_templates, *, filter_out_duplicate_alignments, fail_if_surface_templates_do_not_match_language=True)

Get a new copy of this alignment, except with the given semantic nodes aligned to the associated surface templates.

If fail_if_surface_templates_do_not_match_language is True, an exception will be thrown if the associated surface templates are not found in the instance’s tokens. Otherwise, no alignment will be added.

Return type

LanguageConceptAlignment

to_surface_template(object_node_to_template_variable, *, language_mode, determiner_prefix_slots=i{}, restrict_to_span=None)

Creates a SurfaceTemplate corresponding to all or some portion of this alignment.

The user specifies which semantic object nodes should have their aligned tokens replaced with wildcard slots in the template using object_node_to_template_variable. For example, if you have the language “Fred ate a sandwich” where “Fred” is aligned to object o_1 and “sandwich” is aligned to object o_2, then calling this method with { SLOT1: o_1, SLOT2: o_2} will produce the template “SLOT1 ate SLOT2”.

determiner_prefix_slots is passed along to the constructed SurfaceTemplate so it knows which nodes should be prefixed with a determiner when the template is instantiated in the future. This is an English-specific hack tracked in https://github.com/isi-vista/adam/issues/498.

If restrict_to_span is specified, the template will be built only from tokens of language within that span.

Return type

SurfaceTemplate

class adam.learner.alignments.PerceptionSemanticAlignment(perception_graph, semantic_nodes, functional_concept_to_object_concept=i{})

Represents an alignment between a perception graph and a set of semantic nodes representing concepts.

This is used to represent intermediate semantic data passed between new-style learners when describing a perception.

perception_graph: adam.perception.perception_graph.PerceptionGraph
semantic_nodes: immutablecollections._immutableset.ImmutableSet[adam.semantics.SemanticNode]
functional_concept_to_object_concept: immutablecollections._immutabledict.ImmutableDict[adam.semantics.FunctionalObjectConcept, adam.semantics.ObjectConcept]
static create_unaligned(perception_graph)
Return type

PerceptionSemanticAlignment

copy_with_updated_graph_and_added_nodes(*, new_graph, new_nodes)
Return type

PerceptionSemanticAlignment

copy_with_mapping(*, mapping)
Return type

PerceptionSemanticAlignment

class adam.learner.alignments.LanguagePerceptionSemanticAlignment(*, language_concept_alignment, perception_semantic_alignment)

Represents an alignment of both language and perception with some semantic nodes representing concepts.

This is used to represent intermediate semantic data passed between new-style learners when learning from an example.

language_concept_alignment: adam.learner.alignments.LanguageConceptAlignment
perception_semantic_alignment: adam.learner.alignments.PerceptionSemanticAlignment

adam.learner.surface_templates

Representations of template-with-slots-like patterns over token strings.

class adam.learner.surface_templates.SurfaceTemplate(elements, language_mode, determiner_prefix_slots=i{})

A pattern over TokenSequenceLinguisticDescriptions.

Such a pattern consists of a sequence of token strings and SyntaxSemanticsVariables.

elements: Tuple[Union[str, adam.semantics.SyntaxSemanticsVariable], ...]
num_slots: int
instantiate(template_variable_to_filler, *, attribute_template=False)

Turns a template into a TokenSequenceLinguisticDescription by filling in its variables.

Return type

TokenSequenceLinguisticDescription
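
A sketch of instantiating a template (assuming LanguageMode is found in adam.learner.language_mode and that slot fillers are provided as token tuples):

    from adam.learner.language_mode import LanguageMode
    from adam.learner.surface_templates import SurfaceTemplate
    from adam.semantics import SyntaxSemanticsVariable

    SLOT_X = SyntaxSemanticsVariable("X")
    SLOT_Y = SyntaxSemanticsVariable("Y")

    # "X on Y", where X and Y are wildcard slots.
    template = SurfaceTemplate(
        elements=(SLOT_X, "on", SLOT_Y),
        language_mode=LanguageMode.ENGLISH,
    )
    description = template.instantiate({SLOT_X: ("truck",), SLOT_Y: ("table",)})
    print(description.as_token_sequence())  # e.g. ("truck", "on", "table")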

to_short_string()
Return type

str

static for_object_name(object_name, *, language_mode)
Return type

SurfaceTemplate

match_against_tokens(token_sequence_to_match_against, *, slots_to_filler_spans)

Gets the token indices, if any, for the first match of this template against token_sequence_to_match_against, assuming any slots are filled by the tokens given by slots_to_filler_spans.

Return type

Optional[Span]

class adam.learner.surface_templates.SurfaceTemplateBoundToSemanticNodes(surface_template, slot_to_semantic_node)

A surface template together with a mapping from its slots to particular semantic roles.

This is used to specify the thing whose meaning a template learner is trying to learn. For example, “what does ‘X eats Y’ mean, given that we know X is this thing and Y is that other thing in this particular situation?”

surface_template: adam.learner.surface_templates.SurfaceTemplate
slot_to_semantic_node: immutablecollections._immutabledict.ImmutableDict[adam.semantics.SyntaxSemanticsVariable, adam.semantics.ObjectSemanticNode]

Supporting Classes: Representing Relations

adam.relation

class adam.relation.Relation(relation_type, first_slot, second_slot, *, negated=False)

A relationship which holds between two objects or between a SituationObject and a Region. The latter case is allowed only for the special relation IN_REGION.

This is used for relations in Situations, ObjectStructuralSchemata, perceptions, etc. since they have the same structure.

relation_type: adam.ontology.OntologyNode
first_slot: _ObjectT
second_slot: Union[_ObjectT, Region[_ObjectT]]
negated: bool
negated_copy()
Return type

Relation[~_ObjectT]

copy_remapping_objects(object_mapping)
Return type

Relation[~_NewObjectT]

accumulate_referenced_objects(object_accumulator)

Adds to object_accumulator all objects referenced by this Relation or by any Regions it refers to.

Return type

None

adam.relation.flatten_relations(relation_collections)

Convenience method to enable writing sub-object relations in an ObjectStructuralSchema more easily.

This method simply flattens collections of items in the input iterable.

This is useful because it allows you to write methods for your relations which produce collections of relations as their output. This allows you to use such DSL-like methods to enforce constraints between the relations.

Please see adam.ontology.phase1_ontology.PERSON_SCHEMA for an example of how this is useful.

Return type

ImmutableSet[Relation[Any]]

Supporting Classes: Representing Space

adam.geon

class adam.geon.CrossSectionSize(name)
name: str
class adam.geon.CrossSection(*, has_rotational_symmetry=False, has_reflective_symmetry=False, curved=False)
has_rotational_symmetry: bool
has_reflective_symmetry: bool
curved: bool
class adam.geon.Geon(*, cross_section, cross_section_size, axes, generating_axis=NOTHING)
cross_section: adam.geon.CrossSection
cross_section_size: adam.geon.CrossSectionSize
axes: adam.axes.Axes
generating_axis: adam.axis.GeonAxis
copy(*, axis_mapping=None, output_axis_mapping=None)

Makes an independent copy of this geon.

This will also have its own axes. This geon’s axes will be mapped using axis_mapping if specified. Otherwise, each axis will be copied. If output_axis_mapping is specified, it will be populated with the mapping between original and copied axes. This is somewhat of a hack, but the information is needed when instantiating perceivable sub-object relations from object schemata.

Return type

Geon

class adam.geon.MaybeHasGeon(*args, **kwds)
geon: Optional[adam.geon.Geon]
adam.geon.CONSTANT = constant

Indicates the size of the cross-section of a geon remains roughly constant along its generating axis.

adam.geon.SMALL_TO_LARGE = small-to-large

Indicates the size of the cross-section of a geon increases along its generating axis.

adam.geon.LARGE_TO_SMALL = large-to-small

Indicates the size of the cross-section of a geon decreases along its generating axis.

adam.geon.SMALL_TO_LARGE_TO_SMALL = small-to-large-to-small

Indicates the size of the cross-section of a geon initially increases along the generating axis, but then decreases.

adam.axes

adam.axes.directed(debug_name)
Return type

GeonAxis

adam.axes.straight_up(debug_name)
Return type

GeonAxis

adam.axes.symmetric(debug_name)
Return type

GeonAxis

adam.axes.symmetric_vertical(debug_name)
Return type

GeonAxis

class adam.axes.AxesInfo(addressee=None, axes_facing=i{})
addressee: Optional[_ObjectT]
axes_facing: immutablecollections._immutablemultidict.ImmutableSetMultiDict[_ObjectT, adam.axis.GeonAxis]
copy_remapping_objects(object_map)
Return type

AxesInfo[~_ObjectToT]

class adam.axes.AxisFunction(*args, **kwds)

A procedure for selecting a particular GeonAxis.

This is used in defining the semantics of prepositions and verbs and for defining the spatial relations between parts of an object in ObjectStructuralSchemata.

to_concrete_axis(axes_info)

Select a particular concrete axis.

This function will be provided with an AxesInfo object in concrete situations, which can be used to determine the relationship of object axes to the speaker and the learner. However, this information is not available in more abstract contexts, like ObjectStructuralSchema, and the AxisFunction should throw an exception if called in such a case.

Return type

GeonAxis

copy_remapping_objects(object_map)

Copy remapping objects

Return type

AxisFunction[~_ObjectToT]

accumulate_referenced_objects(object_accumulator)

Accumulate referenced objects

Return type

None

class adam.axes.PrimaryAxisOfObject(object)
to_concrete_axis(axes_info)

Select a particular concrete axis.

This function will be provided with an AxesInfo object in concrete situations, which can be used to determine the relationship of object axes to the speaker and the learner. However, this information is not available in more abstract contexts, like ObjectStructuralSchema, and the AxisFunction should throw an exception if called in such a case.

Return type

GeonAxis

copy_remapping_objects(object_map)

Copy remapping objects

Return type

PrimaryAxisOfObject[~_ObjectToT]

class adam.axes.HorizontalAxisOfObject(object, index)
to_concrete_axis(axes_info)

Select a particular concrete axis.

This function will be provided with an AxesInfo object in concrete situations, which can be used to determine the relationship of object axes to the speaker and the learner. However, this information is not available in more abstract contexts, like ObjectStructuralSchema, and the AxisFunction should throw an exception if called in such a case.

Return type

GeonAxis

copy_remapping_objects(object_map)

Copy remapping objects

Return type

HorizontalAxisOfObject[~_ObjectToT]

class adam.axes.FacingAddresseeAxis(object)
to_concrete_axis(axes_info)

Select a particular concrete axis.

This function will be provided with an AxesInfo object in concrete situations, which can be used to determine the relationship of object axes to the speaker and the learner. However, this information is not available in more abstract contexts, like ObjectStructuralSchema, and the AxisFunction should throw an exception if called in such a case.

Return type

GeonAxis

copy_remapping_objects(object_map)

Copy remapping objects

Return type

FacingAddresseeAxis[~_ObjectToT]

class adam.axes.Axes(*, primary_axis, orienting_axes, axis_relations=i{})
primary_axis: adam.axis.GeonAxis
orienting_axes: immutablecollections._immutableset.ImmutableSet[adam.axis.GeonAxis]
axis_relations: immutablecollections._immutableset.ImmutableSet[adam.relation.Relation[adam.axis.GeonAxis]]
gravitationally_aligned_axis: Optional[adam.axis.GeonAxis]
property all_axes: Iterable[adam.axis.GeonAxis]
Return type

Iterable[GeonAxis]

copy()

Returns a deep copy of this set of axes.

A copy is made of each contained axis. The correspondence of the copies to the previous axes can be tracked because the order of axes is maintained (so the first axis in the copy is a copy of the first axis in the original, etc.)

Return type

Axes

remap_axes(axis_mapping)
Return type

Axes

class adam.axes.HasAxes(*args, **kwds)
axes: adam.axes.Axes

adam.axis

class adam.axis.GeonAxis(debug_name, curved=False, directed=True, aligned_to_gravitational=False)
debug_name: str
curved: bool
directed: bool
aligned_to_gravitational
copy()
Return type

GeonAxis

Supporting Classes: Object Structure

adam.ontology.structural_schema

class adam.ontology.structural_schema.ObjectStructuralSchema(ontology_node, sub_objects=i{}, sub_object_relations=i{}, *, geon=None, axes=NOTHING)

A hierarchical representation of the internal structure of some type of object.

An ObjectStructuralSchema represents the general pattern of the structure of an object, rather than the structure of any particular object (e.g. people in general, rather than a particular person).

For example a person’s body is made up of a head, torso, left arm, right arm, left leg, and right leg. These sub-objects have various relations to one another (e.g. the head is above and supported by the torso).

Declaring an ObjectStructuralSchema can be verbose; see Relations for additional tips on how to make this more compact.

ontology_node: adam.ontology.OntologyNode

The OntologyNode this ObjectStructuralSchema represents the structure of.

sub_objects: immutablecollections._immutableset.ImmutableSet[adam.ontology.structural_schema.SubObject]

The component parts which make up an object of the type given by ontology_node.

These SubObjects themselves wrap ObjectStructuralSchemas and can therefore themselves have complex internal structure.

sub_object_relations: immutablecollections._immutableset.ImmutableSet[adam.relation.Relation[adam.ontology.structural_schema.SubObject]]

A set of Relations which define how the SubObjects relate to one another.

geon: Optional[adam.geon.Geon]
axes: adam.axes.Axes
class adam.ontology.structural_schema.SubObject(schema, *, debug_handle=None)

A sub-component of a generic type of object.

This is for use only in constructing ObjectStructuralSchemata.

schema: adam.ontology.structural_schema.ObjectStructuralSchema

The ObjectStructuralSchema describing the internal structure of this sub-component.

For example, an ARM is a sub-component of a PERSON, but ARM itself has a complex structure (e.g. it includes a hand)

debug_handle: Optional[str]

A human-readable string which should be accessed for debugging purposes only.

Supporting Classes: Action Structure

adam.ontology.during

class adam.ontology.during.DuringAction(*, objects_to_paths=i{}, at_some_point=i{}, continuously=i{})
objects_to_paths: immutablecollections._immutablemultidict.ImmutableSetMultiDict[_ObjectT, adam.ontology.phase1_spatial_relations.SpatialPath[_ObjectT]]
at_some_point: immutablecollections._immutableset.ImmutableSet[adam.relation.Relation[_ObjectT]]
continuously: immutablecollections._immutableset.ImmutableSet[adam.relation.Relation[_ObjectT]]
copy_remapping_objects(object_mapping)
Return type

DuringAction[~_NewObjectT]

accumulate_referenced_objects(object_accumulator)

Adds all objects referenced by this DuringAction to object_accumulator.

Return type

None

union(other_during)

Unify two DuringActions together.

When unifying spatial paths, the paths from self override any conflicts in other_during.

Return type

DuringAction[~_ObjectT]

adam.ontology.action_description

class adam.ontology.action_description.ActionDescriptionVariable(ontology_node=None, *, properties=i{}, debug_handle=NOTHING)

A variable in an action description ranging over objects in Situations.

Unlike most of our classes, ActionDescriptionVariable has id-based hashing and equality. This is because two objects with identical properties are nonetheless distinct.

ontology_node: adam.ontology.OntologyNode

The OntologyNode specifying the type of thing this object is.

properties: immutablecollections._immutableset.ImmutableSet[adam.ontology.OntologyNode]

The OntologyNodes representing the properties this object has.

debug_handle: str
class adam.ontology.action_description.ActionDescriptionFrame(roles_to_variables=i{})
roles_to_variables: immutablecollections._immutabledict.ImmutableDict[adam.ontology.OntologyNode, adam.ontology.action_description.ActionDescriptionVariable]
variables_to_roles: immutablecollections._immutablemultidict.ImmutableSetMultiDict[adam.ontology.action_description.ActionDescriptionVariable, adam.ontology.OntologyNode]
semantic_roles: immutablecollections._immutableset.ImmutableSet[adam.ontology.OntologyNode]
class adam.ontology.action_description.ActionDescription(*, frame, during=None, enduring_conditions=i{}, preconditions=i{}, postconditions=i{}, asserted_properties=i{})
frame: adam.ontology.action_description.ActionDescriptionFrame
during: Optional[adam.ontology.during.DuringAction[adam.ontology.action_description.ActionDescriptionVariable]]
enduring_conditions: immutablecollections._immutableset.ImmutableSet[adam.relation.Relation[adam.ontology.action_description.ActionDescriptionVariable]]
preconditions: immutablecollections._immutableset.ImmutableSet[adam.relation.Relation[adam.ontology.action_description.ActionDescriptionVariable]]
postconditions: immutablecollections._immutableset.ImmutableSet[adam.relation.Relation[adam.ontology.action_description.ActionDescriptionVariable]]
asserted_properties: immutablecollections._immutablemultidict.ImmutableSetMultiDict[adam.ontology.action_description.ActionDescriptionVariable, adam.ontology.OntologyNode]
auxiliary_variables: immutablecollections._immutableset.ImmutableSet[adam.ontology.action_description.ActionDescriptionVariable]

These are variables which do not occupy semantic roles but are still referred to by conditions, paths, etc. An example would be the container for liquid in a “drink” action.
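
As a hedged illustration of how these pieces fit together, the sketch below describes a bare intransitive “jump” action. The AGENT and animate-object nodes are placeholders standing in for whatever role and type nodes the ontology actually provides, and conditions and paths are omitted.

   from adam.ontology import OntologyNode
   from adam.ontology.action_description import (
       ActionDescription,
       ActionDescriptionFrame,
       ActionDescriptionVariable,
   )

   # Placeholder nodes; a real ontology supplies its own semantic-role and type nodes.
   AGENT = OntologyNode("agent")
   JUMPER_TYPE = OntologyNode("animate-object")

   jumper = ActionDescriptionVariable(JUMPER_TYPE)   # a variable ranging over animate objects
   frame = ActionDescriptionFrame({AGENT: jumper})   # the jumper fills the AGENT role
   JUMP_DESCRIPTION = ActionDescription(frame=frame)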

Supporting Classes: Spatial and Size Representation

adam.ontology.phase1_spatial_relations

class adam.ontology.phase1_spatial_relations.Distance(name)

A distance of the sort used by Landau and Jackendoff to specify spatial regions.

name: str
adam.ontology.phase1_spatial_relations.INTERIOR = interior

Figure is within the ground.

adam.ontology.phase1_spatial_relations.EXTERIOR_BUT_IN_CONTACT = exterior-but-in-contact

Figure is outside the ground but contacting it.

adam.ontology.phase1_spatial_relations.PROXIMAL = proximal

Figure is “near” the ground.

adam.ontology.phase1_spatial_relations.DISTAL = distal

Figure is “far” from the ground.

adam.ontology.phase1_spatial_relations.LANDAU_AND_JACKENDOFF_DISTANCES = [interior, exterior-but-in-contact, proximal, distal]

Distances used by Landau and Jackendoff in describing spatial relations.

class adam.ontology.phase1_spatial_relations.Direction(positive, relative_to_axis)

Represents the direction one object may have relative to another.

This is used to specify Regions.

positive: bool

We need to standardize on what “positive” direction means. It is clear for vertical axes but less clear for other things.

relative_to_axis: Union[adam.axis.GeonAxis, adam.axes.AxisFunction[ReferenceObjectT]]
copy_remapping_objects(object_map, *, axis_mapping)
Return type

Direction[~NewObjectT]

relative_to_concrete_axis(axes_info)
Return type

GeonAxis

opposite()
Return type

Direction[~ReferenceObjectT]

class adam.ontology.phase1_spatial_relations.Region(reference_object, distance=None, direction=None)

A region of space perceived by the learner.

We largely follow

Barbara Landau and Ray Jackendoff. “‘What’ and ‘where’ in spatial language and spatial cognition.” Behavioral and Brain Sciences (1993) 16:2,

who analyze spatial relations in terms of a Distance and Direction with respect to some reference_object.

At least one of distance and direction must be specified.

reference_object: ReferenceObjectT
distance: Optional[adam.ontology.phase1_spatial_relations.Distance]
direction: Optional[adam.ontology.phase1_spatial_relations.Direction[ReferenceObjectT]]
copy_remapping_objects(object_map, *, axis_mapping=i{})
Return type

Region[~NewObjectT]

accumulate_referenced_objects(object_accumulator)

Adds all objects referenced by this Region to object_accumulator.

Return type

None

unify(other_region)

Unifies two regions together if the reference object is the same.

Return type

Region[~ReferenceObjectT]
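
A hedged sketch of how Distance, Direction, and Region combine to encode roughly “on the table”. The table placeholder node and the hand-built gravitational axis are illustrative assumptions; real code would normally reuse axes supplied elsewhere.

   from adam.axis import GeonAxis
   from adam.ontology import OntologyNode
   from adam.ontology.phase1_spatial_relations import (
       EXTERIOR_BUT_IN_CONTACT,
       Direction,
       Region,
   )

   table = OntologyNode("table")   # placeholder reference object

   # "On the table": in contact with the table's exterior, on the positive side
   # of a gravitationally aligned axis.
   up = GeonAxis("gravitational-up", aligned_to_gravitational=True)
   on_the_table = Region(
       reference_object=table,
       distance=EXTERIOR_BUT_IN_CONTACT,
       direction=Direction(positive=True, relative_to_axis=up),
   )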

class adam.ontology.phase1_spatial_relations.PathOperator(name)
name: str
class adam.ontology.phase1_spatial_relations.SpatialPath(operator, reference_source_object, reference_destination_object, *, reference_axis=None, orientation_changed=False, properties=i{})
operator: Optional[adam.ontology.phase1_spatial_relations.PathOperator]
reference_source_object: Union[ReferenceObjectT, adam.ontology.phase1_spatial_relations.Region[ReferenceObjectT]]
reference_destination_object: Union[ReferenceObjectT, adam.ontology.phase1_spatial_relations.Region[ReferenceObjectT]]
reference_axis: Optional[Union[adam.axis.GeonAxis, adam.axes.AxisFunction[ReferenceObjectT]]]
orientation_changed: bool
properties: immutablecollections._immutableset.ImmutableSet[adam.ontology.OntologyNode]
copy_remapping_objects(object_mapping)
Return type

SpatialPath[~NewObjectT]

accumulate_referenced_objects(object_accumulator)

Adds all objects referenced by this SpatialPath to object_accumulator.

Return type

None

unify(other_path, *, override=False)
Return type

SpatialPath[~ReferenceObjectT]

adam.ontology.phase1_size_relationships

adam.ontology.phase1_size_relationships.build_size_relationships(relative_size_nodes, *, relation_type, opposite_type)

Build a dictionary representing opposite relation_types between OntologyNodes.

Used primarily to represent relative size_of relationships, this function takes a ranking of OntologyNode objects, relative_size_nodes, and assigns each ordered pair the appropriate relation_type and opposite_type respectively.

For use see GAILA_PHASE_1_ONTOLOGY.

Supporting classes: Situations

adam.situation.high_level_semantics_situation

class adam.situation.high_level_semantics_situation.HighLevelSemanticsSituation(ontology, salient_objects, *, axis_info=AxesInfo(addressee=None, axes_facing=i{}), other_objects=i{}, always_relations=i{}, before_action_relations=i{}, after_action_relations=i{}, actions=i{}, gazed_objects=NOTHING, syntax_hints=i{}, from_template=None)

A human-friendly representation of Situation.

ontology: adam.ontology.ontology.Ontology

The Ontology from which the objects, relations, and actions in this Situation are drawn.

salient_objects: immutablecollections._immutableset.ImmutableSet[adam.situation.SituationObject]

The salient objects present in a Situation. These will usually be the ones expressed in the linguistic form.

axis_info: adam.axes.AxesInfo[adam.situation.SituationObject]

other_objects: immutablecollections._immutableset.ImmutableSet[adam.situation.SituationObject]

These are other objects appearing in the situation which are less important. For example, the cup holding a liquid being drunk.

These typically correspond to auxiliary variables in ActionDescriptions.

all_objects: immutablecollections._immutableset.ImmutableSet[adam.situation.SituationObject]
always_relations: immutablecollections._immutableset.ImmutableSet[adam.relation.Relation[adam.situation.SituationObject]]

The relations which hold in this Situation, both before and after any actions which occur.

It is not necessary to state every relationship which holds in a situation. Rather this should contain the salient relationships which should be expressed in the linguistic description.

Do not specify those relations here which are implied by any actions which occur. Those are handled automatically.

before_action_relations: immutablecollections._immutableset.ImmutableSet[adam.relation.Relation[adam.situation.SituationObject]]

The relations which hold in this Situation, before, but not necessarily after, any actions which occur.

It is not necessary to state every relationship which holds in a situation. Rather this should contain the salient relationships which should be expressed in the linguistic description.

Do not specify those relations here which are implied by any actions which occur. Those are handled automatically.

after_action_relations: immutablecollections._immutableset.ImmutableSet[adam.relation.Relation[adam.situation.SituationObject]]

The relations which hold in this Situation, after, but not necessarily before, any actions which occur.

It is not necessary to state every relationship which holds in a situation. Rather this should contain the salient relationships which should be expressed in the linguistic description.

Do not specify those relations here which are implied by any actions which occur. Those are handled automatically.

actions: immutablecollections._immutableset.ImmutableSet[adam.situation.Action[adam.ontology.OntologyNode, adam.situation.SituationObject]]

The actions occurring in this Situation

is_dynamic: bool

Bool representing whether the situation has any actions, i.e. is dynamic.

gazed_objects: immutablecollections._immutableset.ImmutableSet[adam.situation.SituationObject]

A set of SituationObjects which are the focus of the speaker. Defaults to all semantic role fillers of situation actions.

syntax_hints: immutablecollections._immutableset.ImmutableSet[str]

A temporary hack to allow control of language generation decisions using the situation template language.

See https://github.com/isi-vista/adam/issues/222 .

from_template: Optional[str]
relation_always_holds(query_relation)
Return type

bool
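
A minimal construction sketch, assuming an Ontology my_ontology and SituationObjects mom and ball have been created elsewhere (building SituationObjects is outside the scope of this snippet).

   from adam.situation.high_level_semantics_situation import HighLevelSemanticsSituation

   situation = HighLevelSemanticsSituation(
       ontology=my_ontology,         # an Ontology assumed to exist
       salient_objects=[mom, ball],  # SituationObjects assumed to exist
       # always_relations=..., actions=... would describe a richer scene
   )
   assert not situation.is_dynamic   # no actions, so the situation is static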

Supporting classes: Ontologies

adam.ontology

Representations for simple ontologies.

These ontologies are intended to be used when describing Situations and writing SituationTemplates.

class adam.ontology.OntologyNode(handle, inheritable_properties=i{}, *, non_inheritable_properties=i{})

A node in an ontology representing some type of object, action, or relation, such as “animate object” or “transfer action.”

An OntologyNode has a handle, which is a user-facing description used for debugging and testing only.

It may also have a set of inheritable_properties which are inherited by all child nodes.

handle: str

A simple human-readable description of this node, used for debugging and testing only.

inheritable_properties: immutablecollections._immutableset.ImmutableSet[adam.ontology.OntologyNode]

Properties of the OntologyNode, as a set of OntologyNodes which should be inherited by its children.

non_inheritable_properties: immutablecollections._immutableset.ImmutableSet[adam.ontology.OntologyNode]

Properties of the OntologyNode, as a set of OntologyNodes which should not be inherited by its children.

adam.ontology.CAN_FILL_TEMPLATE_SLOT = can-fill-template-slot

A property indicating that a node can be instantiated in a scene.

The ontology contains many nodes which, while useful for various purposes, do not themselves form part of our primary concept vocabulary. This property distinguishes the elements of our core “concept vocabulary” from such auxiliary concepts.

For example, PERSON is one of our core concepts; we have a concept of ARM which is used in defining the ObjectStructuralSchema of PERSON but disembodied arms should never be instantiated in templates directly.

adam.ontology.THING = thing

Ancestor of all objects in an Ontology.

By convention this should appear in all Ontologys.

adam.ontology.RELATION = relation

Ancestor of all relations in an Ontology.

By convention this should appear in all Ontologys.

adam.ontology.ACTION = action

Ancestor of all actions in an Ontology.

By convention this should appear in all Ontologys.

adam.ontology.PROPERTY = property

Ancestor of all properties in an Ontology.

By convention this should appear in all Ontologys.

adam.ontology.META_PROPERTY = meta-property

A property of a property.

For example, whether it is perceivable or binary.

By convention this should appear in all Ontologys.

adam.ontology.IN_REGION = in-region

Indicates that an object is located in a Region.

This is used to support the Landau and Jackendoff interpretation of prepositions.

adam.ontology.IS_SPEAKER = is-speaker[binary,perceivable]

Indicates that the marked object is the one who is speaking the linguistic description of the situation. This will not be present for all situations. It only makes sense to apply this to sub-types of PERSON, but this is not currently enforced.

adam.ontology.IS_ADDRESSEE = is-addressee[binary,perceivable]

Indicates that the marked object is the one who is addressed. This will not be present for all situations. It only makes sense to apply this to sub-types of PERSON, but this is not currently enforced. E.g. ‘You put the ball on the table.’

adam.ontology.minimal_ontology_graph()

Get the NetworkX DiGraph corresponding to the minimum legal ontology, containing all required nodes.

This is useful as a convenient foundation for building your own ontologies.

adam.ontology.ontology

class adam.ontology.ontology.Ontology(name, graph, structural_schemata=i{}, *, action_to_description=i{}, relations=i{})

A hierarchical collection of types for objects, actions, etc.

Types are represented by OntologyNodes with parent-child relationships.

Every OntologyNode may have a set of properties which are inherited by all child nodes.

Every Ontology must contain the special nodes THING, RELATION, ACTION, PROPERTY, META_PROPERTY, and CAN_FILL_TEMPLATE_SLOT.

An Ontology must have an ObjectStructuralSchema associated with each CAN_FILL_TEMPLATE_SLOT THING. ObjectStructuralSchemata are inherited, but any which are explicitly specified will cause any inherited schemata to be ignored.

To assist in creating legal Ontologys, we provide minimal_ontology_graph.

action_to_description: immutablecollections._immutablemultidict.ImmutableSetMultiDict[adam.ontology.OntologyNode, adam.ontology.action_description.ActionDescription]
relations: immutablecollections._immutableset.ImmutableSet[adam.relation.Relation[adam.ontology.OntologyNode]]
subjects_to_relations: immutablecollections._immutablemultidict.ImmutableSetMultiDict[adam.ontology.OntologyNode, adam.relation.Relation[adam.ontology.OntologyNode]]
ancestors(node)
Return type

Iterable[OntologyNode]

structural_schemata(node)
Return type

AbstractSet[ObjectStructuralSchema]

is_subtype_of(node, query_supertype)

Determines whether node is a sub-type of query_supertype.

Return type

bool

nodes_with_properties(root_node, required_properties=i{}, *, banned_properties=i{}, banned_ontology_types=i{})

Get all OntologyNodes which are dominated by root_node (or are root_node itself) and which (a) possess all the required_properties, (b) possess none of the banned_properties, either directly or by inheritance from a dominating node, and (c) are not identical to or descendants of any of banned_ontology_types.

Parameters
Return type

ImmutableSet[OntologyNode]

Returns

All OntologyNodes which are dominated by root_node (or are root_node itself) which possess all the required_properties and none of the banned_properties, either directly or by inheritance from a dominating node, and are not contained in or dominated by any of banned_ontology_types.

descends_from(node, query_ancestors)
Return type

bool

has_property(node, query_property)

Checks whether an OntologyNode has a given property either directly or via inheritance.

Parameters
Return type

bool

Returns

Whether node possesses query_property, either directly or via inheritance from a dominating node.

has_all_properties(node, query_properties, *, banned_properties=i{})

Checks if an OntologyNode has the given properties, either directly or by inheritance.

Parameters
Return type

bool

Returns

Whether node possesses all of query_properties and none of banned_properties, either directly or via inheritance from a dominating node.

properties_for_node(node)

Get all properties an OntologyNode possesses.

Parameters

node (OntologyNode) – the OntologyNode whose properties you want.

Return type

ImmutableSet[OntologyNode]

Returns

All properties node possesses, whether directly or by inheritance from a dominating node.

required_action_description(action_type, semantic_roles)
Return type

ActionDescription
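
A usage sketch of the query methods above, assuming an Ontology instance my_ontology plus ANIMATE, LIQUID, and PERSON nodes defined elsewhere; those names are illustrative assumptions, not guaranteed by this documentation.

   from immutablecollections import immutableset

   from adam.ontology import THING

   # ANIMATE, LIQUID, and PERSON are assumed to be OntologyNodes defined elsewhere.
   animate_things = my_ontology.nodes_with_properties(
       root_node=THING,
       required_properties=immutableset([ANIMATE]),
       banned_properties=immutableset([LIQUID]),
   )

   assert my_ontology.is_subtype_of(PERSON, THING)
   assert my_ontology.has_property(PERSON, ANIMATE)
   print(my_ontology.properties_for_node(PERSON))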

adam.language.ontology_dictionary

Mappings from Ontologys to particular languages.

class adam.language.ontology_dictionary.OntologyLexicon(ontology, ontology_node_to_word)

A mapping from OntologyNodes to words.

This will become more sophisticated in the future.

ontology: adam.ontology.ontology.Ontology

The ontology this lexicon assigns words to.

words_for_node(node)

Get the translation for an OntologyNode, if available.

Parameters

node (OntologyNode) – The OntologyNode whose translation you want.

Return type

ImmutableSet[LexiconEntry]

Returns

The translation, if available.
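
A brief usage sketch; the lexicon and BALL names are assumptions for illustration.

   # `lexicon` is an OntologyLexicon and BALL an OntologyNode, both assumed to exist.
   entries = lexicon.words_for_node(BALL)   # ImmutableSet[LexiconEntry]
   for entry in entries:
       print(entry.base_form, entry.part_of_speech)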

Supporting classes: Linguistic Representation

adam.language.language_generator

Ways to produce human language descriptions of Situations by rule.

class adam.language.language_generator.LanguageGenerator(*args, **kwds)

A way of describing Situations using human LinguisticDescriptions.

abstract generate_language(situation, chooser)

Generate a collection of human language descriptions of the given Situation.

Parameters

situation (~SituationT) – the Situation to describe.

chooser (SequenceChooser) – the SequenceChooser to use if any random decisions are required.

Return type

ImmutableSet[~LinguisticDescriptionT]

Returns

A LinguisticDescription of that situation.

class adam.language.language_generator.ChooseFirstLanguageGenerator(wrapped_generator)

A LanguageGenerator used to wrap another LanguageGenerator and discard all but its first generated option.

generate_language(situation, chooser)

Generate a collection of human language descriptions of the given Situation.

Parameters

situation (~SituationT) – the Situation to describe.

chooser (SequenceChooser) – the SequenceChooser to use if any random decisions are required.

Return type

ImmutableSet[~LinguisticDescriptionT]

Returns

A LinguisticDescription of that situation.

class adam.language.language_generator.ChooseRandomLanguageGenerator(wrapped_generator, sequence_chooser)

A LanguageGenerator used to wrap another LanguageGenerator and discard all but one of its descriptions, selected at random using the provided SequenceChooser .

generate_language(situation, chooser)

Generate a collection of human language descriptions of the given Situation.

Parameters

situation (~SituationT) – the Situation to describe.

chooser (SequenceChooser) – the SequenceChooser to use if any random decisions are required.

Return type

ImmutableSet[~LinguisticDescriptionT]

Returns

A LinguisticDescription of that situation.

class adam.language.language_generator.SingleObjectLanguageGenerator(ontology_lexicon)

LanguageGenerator for describing a single object.

Describes a Situation containing a SituationObject with a single word: its name according to some OntologyLexicon.

This language generator will throw a ValueError if it receives a situation which contains more than one SituationObject.

generate_language(situation, chooser)

Generate a collection of human language descriptions of the given Situation.

Parameters

situation (LocatedObjectSituation) – the Situation to describe.

chooser (SequenceChooser) – the SequenceChooser to use if any random decisions are required.

Return type

ImmutableSet[TokenSequenceLinguisticDescription]

Returns

A LinguisticDescription of that situation.

class adam.language.language_generator.InSituationLanguageGenerator

For situations which receive external perception processing, the language is embedded in the situation itself.

generate_language(situation, chooser)

Generate a collection of human language descriptions of the given Situation.

Parameters

situation (SimulationSituation) – the Situation to describe.

chooser (SequenceChooser) – the SequenceChooser to use if any random decisions are required.

Return type

ImmutableSet[TokenSequenceLinguisticDescription]

Returns

A LinguisticDescription of that situation.
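
A hedged sketch of wrapping one generator in another; base_generator, situation, and chooser are assumed to exist, and the as_token_sequence() call assumes token-sequence-style LinguisticDescriptions.

   from adam.language.language_generator import ChooseFirstLanguageGenerator

   # base_generator, situation, and chooser are assumed to have been created elsewhere.
   generator = ChooseFirstLanguageGenerator(base_generator)
   descriptions = generator.generate_language(situation, chooser)  # at most one description
   for description in descriptions:
       print(description.as_token_sequence())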

adam.language.lexicon

Data structures for human language words.

These are not used by the TopLevelLanguageLearner, but rather for generating the linguistic descriptions for situations.

class adam.language.lexicon.LexiconEntry(base_form, part_of_speech, properties=i{}, intrinsic_morphosyntactic_properties=i{}, *, counting_classifier=None, plural_form=None, verb_form_sg3_prs=None)
base_form: str

The base linguistic form for a LexiconEntry.

What form is chosen as the base form varies by part-of-speech and from language to language.

For example, in English, the base form for nouns might be the singular, while the base form for verbs might be the present tense.

part_of_speech: adam.language.dependency.PartOfSpeechTag
properties: immutablecollections._immutableset.ImmutableSet[adam.language.lexicon.LexiconProperty]
intrinsic_morphosyntactic_properties: immutablecollections._immutableset.ImmutableSet[adam.language.dependency.MorphosyntacticProperty]
counting_classifier: Optional[str]
plural_form: Optional[str]
verb_form_sg3_prs: Optional[str]
class adam.language.lexicon.LexiconProperty(name)

A linguistic property that a LexiconEntry may possess.

For example, singular, active, etc.

name: str
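
A hypothetical sketch of a couple of lexicon entries. The part-of-speech tag is built directly here for self-containment, and the mass-noun property name is purely illustrative.

   from immutablecollections import immutableset

   from adam.language.dependency import PartOfSpeechTag
   from adam.language.lexicon import LexiconEntry, LexiconProperty

   NOUN = PartOfSpeechTag("noun")            # universal_dependencies also provides constants
   MASS_NOUN = LexiconProperty("mass-noun")  # illustrative property name

   BALL = LexiconEntry("ball", NOUN, plural_form="balls")
   WATER = LexiconEntry("water", NOUN, properties=immutableset([MASS_NOUN]))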

adam.language.dependency

Representations for dependency trees

class adam.language.dependency.DependencyTree(graph)

A syntactic dependency tree.

This consists of DependencyTreeTokens connected by edges labelled with DependencyRoles. Edges run from modifiers to heads.

Note a DependencyTree is not a LinguisticDescription because it does not provide a surface token string, since the dependencies are unordered.

You can pair a DependencyTree with a surface order to create a LinearizedDependencyTree.

root: adam.language.dependency.DependencyTreeToken

The unique root DependencyTreeToken of the tree.

This is the single token which does not modify any other token.

tokens: immutablecollections._immutableset.ImmutableSet[adam.language.dependency.DependencyTreeToken]

The set of all DependencyTreeTokens appearing in this tree.

modifiers(head)

All DependencyTreeTokens modifying head and their DependencyRoles.

Return type

ImmutableSet[Tuple[DependencyTreeToken, DependencyRole]]

Returns

A set of (DependencyTreeToken, DependencyRole) tuples corresponding to all modifications of head.

class adam.language.dependency.LinearizedDependencyTree(dependency_tree, surface_token_order=(), *, accurate=True)

A DependencyTree paired with a surface word order.

dependency_tree: adam.language.dependency.DependencyTree
surface_token_order: Tuple[adam.language.dependency.DependencyTreeToken, ...]
surface_token_strings: Tuple[str, ...]
accurate: bool

Used to specify whether the language here matches the paired situation.

as_token_sequence()

Get this description as a tuple of token strings.

Return type

Tuple[str, …]

Returns

A tuple of token strings describing this LinguisticDescription

class adam.language.dependency.PartOfSpeechTag(name)

Part-of-speech tags.

For example, “noun”, “verb”, etc.

Every DependencyTreeToken must be assigned one of these. We provide constants for the Universal Dependencies POS tags in adam.language.dependency.universal_dependencies.

name: str
class adam.language.dependency.MorphosyntacticProperty(name)
name: str
class adam.language.dependency.DependencyTreeToken(token, part_of_speech, morphosyntactic_properties=i{})

A single word in a DependencyTree

token: str
part_of_speech: adam.language.dependency.PartOfSpeechTag
morphosyntactic_properties: immutablecollections._immutableset.ImmutableSet[adam.language.dependency.MorphosyntacticProperty]
class adam.language.dependency.DependencyRole(name)

The syntactic relationship between two nodes in a DependencyTree.

We provide constants for the Universal Dependencies syntactic relations in adam.language.dependency.universal_dependencies.

name: str
class adam.language.dependency.DependencyTreeLinearizer

A method for supplying a particular order to the words in a DependencyTree.

abstract linearize(dependency_tree)

Determine a surface word order for a DependencyTree.

Parameters

dependency_tree (DependencyTree) – The DependencyTree to determine a surface word order for.

Return type

LinearizedDependencyTree

Returns

A LinearizedDependencyTree pairing the input DependencyTree with a surface word order.

adam.language.dependency.HEAD = head

A special DependencyRole used to indicate the position of the head word itself when constructing a RoleOrderDependencyTreeLinearizer.

class adam.language.dependency.RoleOrderDependencyTreeLinearizer(head_pos_to_role_order=i{})

Assigns an order to the words of a DependencyTree by ordering modifiers relative to their head based only on the syntactic relation they have to the head.

The ordering of multiple modifiers with the same syntactic relation is undefined (Issue #57).

linearize(dependency_tree)

Determine a surface word order for a DependencyTree.

Parameters

dependency_tree (DependencyTree) – The DependencyTree to determine a surface word order for.

Return type

LinearizedDependencyTree

Returns

A LinearizedDependencyTree pairing the input DependencyTree with a surface word order.

adam.language.dependency.universal_dependencies

Universal Dependencies part-of-speech tags and syntactic relations.

These are provided for convenience when writing LanguageGenerators.

See https://universaldependencies.org/
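
For example, the UD constants can be used when building DependencyTreeTokens; this is a sketch, and the exact constant names (NOUN, VERB) are assumed from the Universal Dependencies tag set.

   from adam.language.dependency import DependencyTreeToken
   from adam.language.dependency.universal_dependencies import NOUN, VERB

   ball = DependencyTreeToken("ball", NOUN)
   rolls = DependencyTreeToken("rolls", VERB)
   # These tokens would then be connected by DependencyRole-labelled edges
   # to form a DependencyTree.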

Supporting classes: Perceptual Representation

adam.perception.developmental_primitive_perception

class adam.perception.developmental_primitive_perception.DevelopmentalPrimitivePerceptionFrame(perceived_objects, *, axis_info=AxesInfo(addressee=None, axes_facing=i{}), property_assertions=i{}, relations=i{})

A static snapshot of a Situation based on developmentally-motivated perceptual primitives.

This represents a situation in terms of a set of perceived objects, their properties, and the relations among them (see the attributes below).

This is the default perceptual representation for at least the first phase of the ADAM project.

perceived_objects: immutablecollections._immutableset.ImmutableSet[adam.perception.ObjectPerception]

a set of ObjectPerceptions, with one corresponding to each object in the scene (e.g. a ball, Mom, Dad, etc.)

axis_info: adam.axes.AxesInfo[adam.perception.ObjectPerception]
property_assertions: immutablecollections._immutableset.ImmutableSet[adam.perception.developmental_primitive_perception.PropertyPerception]

a set of PropertyPerceptions which associate an ObjectPerception with perceived properties of various sorts (e.g. color, sentience, etc.)

relations: immutablecollections._immutableset.ImmutableSet[adam.relation.Relation[adam.perception.ObjectPerception]]
a set of Relations which describe the learner’s perception of how two ObjectPerceptions are related.

Symmetric relations should be included as two separate relations, one in each direction.

class adam.perception.developmental_primitive_perception.PropertyPerception(perceived_object)

A learner’s perception that the perceived_object possesses a certain property.

The particular property is specified in a sub-class dependent way.

perceived_object
class adam.perception.developmental_primitive_perception.HasBinaryProperty(perceived_object, binary_property)

A learner’s perception that perceived_object possesses the given binary_property.

binary_property
class adam.perception.developmental_primitive_perception.RgbColorPerception(red, green, blue)

A perceived color.

red: int
green: int
blue: int
inverse()
Return type

RgbColorPerception

property hex: str
Return type

str

class adam.perception.developmental_primitive_perception.HasColor(perceived_object, color)

A learner’s perception that perceived_object has the RgbColorPerception color.

color

adam.perception.high_level_semantics_situation_to_developmental_primitive_perception

Code for converting HighLevelSemanticsSituations into DevelopmentalPrimitivePerceptionFrames.

adam.perception.perception_graph

Code for representing DevelopmentalPrimitivePerceptionFrames as directed graphs and for matching patterns against such graphs. Such patterns could be used to implement object recognition, among other things.

This file first defines PerceptionGraphs, then defines PerceptionGraphPatterns to match them.

Readers should start with PerceptionGraphProtocol, PerceptionGraph, and PerceptionGraphPattern before reading other parts of this module.

class adam.perception.perception_graph.Incrementer(initial_value=0)
value()
Return type

int

increment(amount=1)
Return type

None

adam.perception.perception_graph.EdgeLabel = typing.Union[adam.ontology.OntologyNode, str, adam.ontology.phase1_spatial_relations.Direction[typing.Any]]

This is the core information stored on a perception graph edge. This is wrapped in TemporallyScopedEdgeLabel before actually being applied to a dynamic DiGraph edge.

adam.perception.perception_graph.assert_valid_edge_label(base_edge_label)
Return type

None

adam.perception.perception_graph.valid_edge_label(inst, attr, value)

Wraps assert_valid_edge_label for use as an attrs validator.

Return type

None

class adam.perception.perception_graph.TemporalScope(value)

In a dynamic situation, specifies the relationship of perception graph edges to the perception frames.

BEFORE = 'before'

Indicates a relationship holds in the first frame.

AFTER = 'after'

Indicates a relationship holds in the second frame.

DURING = 'during'

Indicates a relationship continuously holds in the interval between frames.

AT_SOME_POINT = 'at-some-point'

Indicates a relationship holds at some point in the interval between frames.

class adam.perception.perception_graph.TemporallyScopedEdgeLabel(attribute, temporal_specifiers=i{})

An edge attribute in a PerceptionGraph which is annotated for what times it holds true.

These should only be used in PerceptionGraphs representing dynamic situations, in which every edge label should be wrapped with this class.

attribute: Union[adam.ontology.OntologyNode, str, adam.ontology.phase1_spatial_relations.Direction[Any]]
temporal_specifiers: immutablecollections._immutableset.ImmutableSet[adam.perception.perception_graph.TemporalScope]
static for_dynamic_perception(attribute, when: Union[TemporalScope, Iterable[TemporalScope]])
Return type

TemporallyScopedEdgeLabel
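
A small sketch of building temporally scoped edge labels from one of the edge-label constants defined below.

   from adam.perception.perception_graph import (
       HAS_PROPERTY_LABEL,
       TemporalScope,
       TemporallyScopedEdgeLabel,
   )

   # A has-property edge that holds only in the "before" frame of a dynamic situation.
   before_only = TemporallyScopedEdgeLabel.for_dynamic_perception(
       HAS_PROPERTY_LABEL, when=TemporalScope.BEFORE
   )

   # The same edge holding both before and after.
   before_and_after = TemporallyScopedEdgeLabel.for_dynamic_perception(
       HAS_PROPERTY_LABEL, when=[TemporalScope.BEFORE, TemporalScope.AFTER]
   )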

adam.perception.perception_graph.REFERENCE_OBJECT_LABEL = reference-object

Edge label in a PerceptionGraph linking a Region to its reference object.

adam.perception.perception_graph.PRIMARY_AXIS_LABEL = primary-axis

Edge label in a PerceptionGraph linking a Geon to its primary GeonAxis.

adam.perception.perception_graph.HAS_AXIS_LABEL = has-axis

Edge label in a PerceptionGraph linking any node of type HasAxes to its associated axes.

adam.perception.perception_graph.GENERATING_AXIS_LABEL = generating-axis

Edge label in a PerceptionGraph linking a Geon to its generating GeonAxis.

adam.perception.perception_graph.HAS_GEON_LABEL = geon

Edge label in a PerceptionGraph linking an ObjectPerception to its associated Geon.

adam.perception.perception_graph.HAS_PROPERTY_LABEL = has-property

Edge label in a PerceptionGraph linking an ObjectPerception to its associated PropertyPerception.

adam.perception.perception_graph.FACING_OBJECT_LABEL = facing-axis

Edge label in a PerceptionGraph linking a GeonAxis to an ObjectPerception it is facing

adam.perception.perception_graph.REFERENCE_AXIS_LABEL = reference-axis

Edge label in a PerceptionGraph linking a SpatialPath to its reference axis.

adam.perception.perception_graph.HAS_PATH_LABEL = has-path

Edge label in a PerceptionGraph linking an object to the SpatialPath which it takes in a dynamic situation.

adam.perception.perception_graph.HAS_PATH_OPERATOR = has-path-operator

Edge label in a PerceptionGraph linking a SpatialPath to its PathOperator

adam.perception.perception_graph.HAS_STROKE_LABEL = has-stroke-label

Edge label in a PerceptionGraph linking an object cluster to one of the `ObjectStroke`s which make up the object.

adam.perception.perception_graph.ADJACENT_STROKE_LABEL = adjacent-stroke-label

Edge label in a PerceptionGraph linking an ObjectStroke to another ObjectStroke to indicate the two are adjacent. The relationship is symmetric even though the edge is not.

adam.perception.perception_graph.ORIENTATION_CHANGED_PROPERTY = orientation-changed

Property used in perception graphs to indicate an orientation change while an object traverses a path.

class adam.perception.perception_graph.PerceptionGraphProtocol(*args, **kwds)
dynamic: bool
copy_as_digraph()
Return type

DiGraph

render_to_file(graph_name, output_file, *, match_correspondence_ids=i{}, robust=True, replace_node_labels=i{})

Debugging tool to render the graph to PDF using dot.

Return type

None

text_dump()
Return type

str

class adam.perception.perception_graph.PerceptionGraph(graph, dynamic=False)

Represents a DevelopmentalPrimitivePerceptionFrame as a directed graph.

Perception graphs may be static (representing a single snapshot of a situation) or dynamic, representing a changing situation. This is encoded by the dynamic field.

ObjectPerceptions, properties, Geons, GeonAxis objects, and Regions are nodes. Edges should have the label attribute mapped to an EdgeLabel, if the graph is static, or to a TemporallyScopedEdgeLabel, if dynamic.

These can be matched against by PerceptionGraphPatterns.

dynamic: bool
static from_frame(frame)

Gets the PerceptionGraph corresponding to a DevelopmentalPrimitivePerceptionFrame.

Return type

PerceptionGraph

static from_dynamic_perceptual_representation(perceptual_representation)
Return type

PerceptionGraph

static add_temporal_scopes_to_edges(digraph, temporal_scopes)

Modifies the given digraph in place, applying the given TemporalScopes to all edges. The resulting graph will be dynamic.

Note that this should only be applied to static perception digraphs.

Return type

DiGraph

static from_simulated_frame(frame)
Return type

PerceptionGraph

static from_dynamic_simulated_perception_frame(perceptual_representation)
Return type

PerceptionGraph

copy_with_temporal_scopes(temporal_scopes)

Produces a copy of this perception graph with the given TemporalScopes applied to all edges. This new graph will be dynamic.

This graph must be a static graph or a RuntimeError will be raised.

Return type

PerceptionGraph

subgraph_by_nodes(nodes_to_keep)
Return type

PerceptionGraph

count_nodes_matching(node_predicate)
render_to_file(graph_name, output_file, *, match_correspondence_ids=i{}, robust=True, replace_node_labels=i{})

Debugging tool to render the graph to PDF using dot.

If this graph has been matched against a pattern, the matched nodes can be highlighted and given labels which show what they correspond to in the pattern by supplying match_correspondence_ids, which maps graph nodes to the desired correspondence labels.

If robust is True (the default), then this will suppress crashes on failures.

Return type

None

successors(graph_node)
class adam.perception.perception_graph.PerceptionGraphPattern(graph, dynamic=False)

A pattern which can match PerceptionGraphs.

Such patterns could be used, for example, to represent a learner’s knowledge of an object for object recognition.

dynamic: bool
check_isomorphism(other_graph)

Compares two pattern graphs and returns true if they are isomorphic, including edges and node attributes.

Return type

bool

matcher(graph_to_match_against, *, debug_callback=None, match_mode, allowed_matches=i{})

Creates an object representing an attempt to match this pattern against graph_to_match_against.

Return type

PatternMatching

static from_schema(object_schema, *, perception_generator)

Creates a pattern for recognizing an object based on its object_schema.

Return type

PerceptionGraphPattern

static from_graph(perception_graph)

Creates a pattern for recognizing an object based on its perception_graph.

Return type

PerceptionGraphPatternFromGraph

static from_ontology_node(node, ontology, *, perception_generator)

Creates a pattern for recognizing an object based on its ontology_node.

Return type

PerceptionGraphPattern

static phase3_pattern(node)
Return type

PerceptionGraphPattern

pattern_complexity()
Return type

int

copy_with_temporal_scopes(required_temporal_scopes)

Produces a copy of this perception graph pattern where all edge predicates now require that the edge in the target graph being matched hold at all of the required_temporal_scopes.

The new pattern will be dynamic.

This pattern must be a static graph or a RuntimeError will be raised.

Return type

PerceptionGraphPattern

count_nodes_matching(node_predicate)
Return type

int

render_to_file(graph_name, output_file, *, match_correspondence_ids=i{}, robust=True, replace_node_labels=i{})

Debugging tool to render the pattern to PDF using dot.

If this pattern has been matched against a PerceptionGraph, the matched nodes can be highlighted and given labels which show what they correspond to in the pattern by supplying match_correspondence_ids, which maps graph nodes to the desired correspondence labels.

Return type

None

intersection(graph_pattern, *, debug_callback=None, graph_logger=None, ontology, allowed_matches=i{}, match_mode, trim_after_match=None)

Determine the largest partial match between two PerceptionGraphPatterns.

The algorithm used is approximate and is not guaranteed to return the largest possible match.

Return type

Optional[PerceptionGraphPattern]

class adam.perception.perception_graph.DumpPartialMatchCallback(render_path, seconds_to_wait_before_rendering=0, dump_every_x_calls=100)

Helper callable class for debugging purposes. An instance of this object can be provided as the debug_callback argument of GraphMatching.match to render the match search process every dump_every_x_calls calls, starting after the first seconds_to_wait_before_rendering seconds.

class adam.perception.perception_graph.PerceptionGraphPatternFromGraph(perception_graph_pattern, perception_graph_node_to_pattern_node)

See PerceptionGraphPattern.from_graph

perception_graph_pattern: adam.perception.perception_graph.PerceptionGraphPattern
perception_graph_node_to_pattern_node: immutablecollections._immutabledict.ImmutableDict[Union[adam.perception.ObjectPerception, adam.ontology.OntologyNode, Tuple[adam.ontology.phase1_spatial_relations.Region[Any], int], Tuple[adam.geon.Geon, int], adam.axis.GeonAxis, adam.geon.CrossSection, adam.semantics.SemanticNode, adam.ontology.phase1_spatial_relations.SpatialPath[adam.perception.ObjectPerception], adam.ontology.phase1_spatial_relations.PathOperator, adam.perception.perception_graph_nodes.GraphNode, adam.perception.perception_graph_nodes.ObjectStroke], adam.perception.perception_graph_predicates.NodePredicate]
class adam.perception.perception_graph.PatternMatching(pattern, graph_to_match_against, matching_pattern_against_pattern=NOTHING, *, match_mode, debug_callback=None, allowed_matches=i{})

An attempt to align a PerceptionGraphPattern to nodes in a PerceptionGraph.

This is equivalent to finding a sub-graph of graph_to_match which is isomorphic to pattern. Currently we only handle node-induced sub-graph isomorphisms, but we might really want edge-induced: https://github.com/isi-vista/adam/issues/400

pattern: adam.perception.perception_graph.PerceptionGraphPattern
graph_to_match_against: adam.perception.perception_graph.PerceptionGraphProtocol
matching_pattern_against_pattern: bool
debug_callback: Optional[Callable[[networkx.classes.digraph.DiGraph, Dict[Any, Any]], None]]
allowed_matches: immutablecollections._immutablemultidict.ImmutableSetMultiDict[Any, Any]
class MatchFailure(*, pattern, graph, pattern_node_to_graph_node_for_largest_match, last_failed_pattern_node, largest_match_pattern_subgraph=NOTHING, largest_match_graph_subgraph=NOTHING)

Indicates a failed attempt at matching a PerceptionGraphPattern.

pattern_node_to_graph_node_for_largest_match indicates the partial match found with the largest number of nodes. Note that this is not necessarily the largest possible partial match, just the largest one encountered during the search process before the algorithm decided that a full match was not possible.

last_failed_pattern_node is the last pattern node the algorithm attempted to match at the point it decided a match was impossible. Note that this is not necessarily the node responsible for the match failure; it could fail due to the edge predicate on the connecting edge, or it could fail due to there being no proper match for a pattern node which had no good alignment earlier in the match process.

pattern: adam.perception.perception_graph.PerceptionGraphPattern
graph: adam.perception.perception_graph.PerceptionGraphProtocol
pattern_node_to_graph_node_for_largest_match: Mapping[Any, Any]
last_failed_pattern_node: adam.perception.perception_graph_predicates.NodePredicate
largest_match_pattern_subgraph: adam.perception.perception_graph.PerceptionGraphPattern
largest_match_graph_subgraph: networkx.classes.digraph.DiGraph
matches(*, use_lookahead_pruning, suppress_multiple_alignments_to_same_nodes=True, initial_partial_match=i{}, graph_logger=None)

Attempts the matching and returns a generator over the set of possible matches.

If suppress_multiple_alignments_to_same_nodes is True (default True), then only the first alignment encountered for a given set of nodes will be returned. This prevents you from e.g. getting multiple matches for different ways of aligning axes for symmetric objects. The cost is that we need to keep around a memory of previous node matches.

matching_pattern_against_pattern indicates you are matching one pattern against another, rather than against a perception graph. This should get split off into its own distinct method: https://github.com/isi-vista/adam/issues/487

Return type

Iterable[PerceptionGraphPatternMatch]

first_match_or_failure_info(*, initial_partial_match=i{}, graph_logger=None)

Gets the first match encountered of the pattern against the graph (which one is first is deterministic but undefined) or a PatternMatching.MatchFailure giving debugging information about a failed match attempt.

Return type

Union[PerceptionGraphPatternMatch, MatchFailure]

relax_pattern_until_it_matches(*, graph_logger=None, ontology, min_ratio=None, trim_after_match)

Prunes or relaxes the pattern for this matching until it successfully matches using heuristic rules.

If a matching relaxed PerceptionGraphPattern can be found, it is returned. Otherwise, None is returned.

Return type

Optional[PerceptionGraphPattern]
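
A hedged end-to-end sketch of matching a pattern against the graph it was derived from. The frame and the match_mode value are assumptions; the documentation above does not enumerate the allowed match modes.

   from adam.perception.perception_graph import PerceptionGraph, PerceptionGraphPattern

   graph = PerceptionGraph.from_frame(frame)           # `frame` assumed to exist
   wrapper = PerceptionGraphPattern.from_graph(graph)  # a PerceptionGraphPatternFromGraph
   pattern = wrapper.perception_graph_pattern

   matching = pattern.matcher(graph, match_mode=match_mode)  # match_mode value is assumed
   result = matching.first_match_or_failure_info()
   # `result` is either a PerceptionGraphPatternMatch or a PatternMatching.MatchFailure.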

class adam.perception.perception_graph.PerceptionGraphPatternMatch(*, matched_pattern, graph_matched_against, matched_sub_graph, pattern_node_to_matched_graph_node)

Represents a match of a PerceptionGraphPattern against a PerceptionGraph.

matched_pattern: adam.perception.perception_graph.PerceptionGraphPattern
graph_matched_against: adam.perception.perception_graph.PerceptionGraphProtocol
matched_sub_graph: adam.perception.perception_graph.PerceptionGraphProtocol
pattern_node_to_matched_graph_node: Mapping[adam.perception.perception_graph_predicates.NodePredicate, Union[adam.perception.ObjectPerception, adam.ontology.OntologyNode, Tuple[adam.ontology.phase1_spatial_relations.Region[Any], int], Tuple[adam.geon.Geon, int], adam.axis.GeonAxis, adam.geon.CrossSection, adam.semantics.SemanticNode, adam.ontology.phase1_spatial_relations.SpatialPath[adam.perception.ObjectPerception], adam.ontology.phase1_spatial_relations.PathOperator, adam.perception.perception_graph_nodes.GraphNode, adam.perception.perception_graph_nodes.ObjectStroke]]

A mapping of pattern nodes from matched_pattern to the nodes in matched_sub_graph they were aligned to.

class adam.perception.perception_graph.EdgePredicate

Super-class for pattern graph edges.

abstract dot_label()

Edge label to use when rendering patterns as graphs using dot.

Return type

str

reverse_in_dot_graph()

In the dot graph, should this edge be treated as reversed for layout purposes? (dot tries to put the sources of edges to the left of the destinations)

Return type

bool

abstract matches_predicate(edge_predicate)

Returns whether edge_predicate matches self.

Return type

bool

class adam.perception.perception_graph.HoldsAtTemporalScopePredicate(wrapped_edge_predicate, temporal_scopes)

EdgePredicate which matches an edge with a TemporallyScopedEdgeLabel whose attribute matches wrapped_edge_predicate and which has at least the given temporal_scopes (but may have others).

wrapped_edge_predicate: adam.perception.perception_graph.EdgePredicate
temporal_scopes: immutablecollections._immutableset.ImmutableSet[adam.perception.perception_graph.TemporalScope]
dot_label()

Edge label to use when rendering patterns as graphs using dot.

Return type

str

matches_predicate(edge_predicate)

Returns whether edge_predicate matches self.

Return type

bool

class adam.perception.perception_graph.AnyEdgePredicate

EdgePredicate which matches any edge.

dot_label()

Edge label to use when rendering patterns as graphs using dot.

Return type

str

matches_predicate(edge_predicate)

Returns whether edge_predicate matches self.

Return type

bool

class adam.perception.perception_graph.RelationTypeIsPredicate(relation_type)

EdgePredicate which matches a relation of the given type.

relation_type: adam.ontology.OntologyNode
dot_label()

Edge label to use when rendering patterns as graphs using dot.

Return type

str

reverse_in_dot_graph()

In the dot graph, should this edge be treated as reversed for layout purposes? (dot tries to put the sources of edges to the left of the destinations)

Return type

bool

matches_predicate(edge_predicate)

Returns whether edge_predicate matches self.

Return type

bool

class adam.perception.perception_graph.DirectionPredicate(reference_direction)

EdgePredicate which matches a Direction object annotating an edge between a Region and an ObjectPerception.

reference_direction: adam.ontology.phase1_spatial_relations.Direction[Any]
dot_label()

Edge label to use when rendering patterns as graphs using dot.

Return type

str

static exactly_matching(direction)
Return type

DirectionPredicate

matches_predicate(edge_predicate)

Returns whether edge_predicate matches self.

Return type

bool

adam.perception.perception_graph.GOVERNED = governed

An object match which is governed in a prepositional relationship.

adam.perception.perception_graph.MODIFIED = modified

An object match which is modified in a prepositional relationship.

class adam.perception.perception_graph.GraphLogger(log_directory, enable_graph_rendering, serialize_graphs=False)
log_directory: pathlib.Path
enable_graph_rendering: bool
serialize_graphs: bool
call_count: int
log_graph(graph, level, msg, *args, match_correspondence_ids=i{}, graph_name=None)
Return type

None

log_match_failure(match_failure, level, msg, *args, graph_name=None)
Return type

None

log_pattern_match(pattern_match, level, msg, *args, graph_name=None)
Return type

None

adam.perception.perception_graph.raise_graph_exception(exception_message, graph)
adam.perception.perception_graph.edge_equals_ignoring_temporal_scope(edge_label, query)
Return type

bool

adam.perception.perception_graph.get_features_from_semantic_node(semantic_root, perception_graph)
Return type

Sequence[str]

adam.perception.marr

Visual Representation based on

David Marr, Vision, Chapter 5, “Representing Shapes for Recognition”.

This code and its comments presumes familiarity with that chapter; the book should be available from any university library (usually digitally).

Marr represents objects in terms of hierarchical structures with object-centered coordinate systems, which he calls “models”. Marr’s exposition does not sharply distinguish a model-as-a-representation-of-a-particular-object-instance from model-as-a-representation-of-an-object-class. In code we need to be a little more explicit, so we call the former a Marr3dObject and the latter a Marr3dModel.

Unfortunately, Python’s lack of forward declarations means the classes in this module are not declared in the best order for reading. I suggest reading them in the following order:

These describe the model-as-representation-of-a-particular-object-instance (e.g. the particular truck I see right now):

  • Marr3dObject

  • Cylinder

  • AdjunctRelation

These describe the model-as-a-representation-of-an-object-class (e.g. trucks in general):

  • Marr3dModel

  • CylinderRange

  • AdjunctRelationRange

These describe the structures which can be used to match instances to classes (that is, to identify what it is you are looking at):

  • MarrModelIndex

  • SpecificityIndex

  • SpecificityIndexNode

class adam.perception.marr.Cylinder(*, length_in_meters, diameter_in_meters)

A cylinder, irrespective of orientation.

Marr’s representation builds objects up from generalized cylinders; right now we only represent cylinders with circular cross-sections.

length_in_meters: float
diameter_in_meters: float
class adam.perception.marr.CylinderRange(*, length_range_in_meters, diameter_range_in_meters)

Represents a set of possible cylinders.

This is used for object recognition.

length_range_in_meters: vistautils.range.Range[float]
diameter_range_in_meters: vistautils.range.Range[float]
class adam.perception.marr.AdjunctRelation(*, s_relative_to_reference_length, orientation)

Specifies the location and orientation of a cylinder \(S\) with respect to another cylinder \(A\).

“Two three-dimensional vectors are required to specify the position in space of one axis relative to another… The first vector, written in cylindrical coordinates \((p, r, \Theta)\), defines the starting point of \(S\) relative to \(A\); the second vector, written in spherical coordinates \((\iota, \phi, s)\), specified \(S\) itself. We shall call the combined specification \((p, r, \Theta, \iota, \phi, s)\) an adjunct relation for \(S\) relative to \(A\).” ~ Marr, Vision, p. 308

The above dimensions are explained in the figure below:

(Figure: marr-adjunct-relation-figure, illustrating the adjunct relation coordinates.)

We maintain the coordinate names from Marr, but we split those which are also applicable to Marr3dObjects into the class AdjunctRelation.Orientation so that they can be reused there.

class Orientation(*, theta_in_degrees, iota_in_degrees, phi_in_degrees, p_relative_to_reference_length, r_relative_to_reference_width)
theta_in_degrees: float
iota_in_degrees: float
phi_in_degrees: float
p_relative_to_reference_length: float
r_relative_to_reference_width: float
static create_same_as_reference_cylinder()
Return type

Orientation

s_relative_to_reference_length: float
orientation: adam.perception.marr.AdjunctRelation.Orientation
static create_same_as_reference_cylinder()

Gets the adjunct relation which specifies that the described cylinder is exactly the same as the reference cylinder.

This is frequently used for defining the principal cylinders of Marr3dModels.

Return type

AdjunctRelation

class adam.perception.marr.AdjunctRelationRange(*, p_relative_to_reference_length, r_relative_to_reference_width, theta_in_degrees, iota_in_degrees, phi_in_degrees, s_relative_to_reference_length)

Represents a set of acceptable adjunct relations for object identification.

“Because the precision with which 3-D models can represent a shape varies, it is appropriate to represent the angles and lengths that occur in an adjunct relation in a system that is also capable of variable precision. For instance, one might wish to state that a particular axis, like the arm component of the human 3-D model… is connected rather precisely at one end of the torso (that is, that the value of \(p\) is exactly \(0\)), but with \(\Theta\) only coarsely specified and with very little restriction on \(\iota\).”

p_relative_to_reference_length: vistautils.range.Range[float]
r_relative_to_reference_width: vistautils.range.Range[float]
theta_in_degrees: vistautils.range.Range[float]
iota_in_degrees: vistautils.range.Range[float]
phi_in_degrees: vistautils.range.Range[float]
s_relative_to_reference_length: vistautils.range.Range[float]
class adam.perception.marr.Marr3dObject(*, bounding_cylinder, principal_cylinder, components=i{})

A Marr-ian representation of a particular object instance (e.g. a particular truck I see, not trucks in general).

An object is defined by:

  • A bounding_cylinder specifying a Cylinder (which Marr calls an “Axis”) which bounds the object.

  • components, a typing.Mapping of sub-objects to AdjunctRelation.Orientations describing their orientation relative to the primary model. We only need the orientation information rather than the full adjunct relation because Marr3dObjects possess absolute size information.

  • a principal_cylinder which defines the Cylinder that sub-object positions and orientations are defined with respect to. This is normally the bounding cylinder (Marr’s “model axis”) but may differ in some cases. To use Marr’s example, for a “human being” object it is more convenient to specify the sub-components with respect to the “torso” axis than the bounding cylinder axis.

TODO: add orientation information - https://github.com/isi-vista/adam/issues/21

bounding_cylinder: adam.perception.marr.Cylinder
principal_cylinder: adam.perception.marr.Cylinder
components: immutablecollections._immutabledict.ImmutableDict[adam.perception.marr.Marr3dObject, adam.perception.marr.AdjunctRelation.Orientation]
static create_from_bounding_cylinder(bounding_cylinder)
Return type

Marr3dObject

class adam.perception.marr.Marr3dModel(*, bounding_cylinder_range, principal_cylinder_relative_to_bounding_cylinder, components)

A Marr-ian representation of a recognition model for a class of instances (e.g. “trucks in general”, not a particular truck).

A model is defined by:

  • A bounding_cylinder_range specifying the constraints on the possible sizes of the bounding_cylinder of any Marr3dObject matching this model. Marr calls this the “model axis.”

  • a principal_cylinder_relative_to_bounding_cylinder which gives the AdjunctRelation specifying the size and orientation of the principal cylinder for this model relative to the bounding cylinder. Marr refers to this as the “principal axis”; the sizes and orientations of all sub-components are relative to this axis/cylinder.

This adjunct relation often specifies a cylinder equal to the bounding cylinder, but may differ in some cases. To use Marr’s example, for a “human being” object it is more convenient to specify the sub-components with respect to the “torso” cylinder/axis than the bounding cylinder/axis.

  • components, a typing.Mapping of sub-models to AdjunctRelations describing their orientation relative to the primary model.

bounding_cylinder_range: adam.perception.marr.CylinderRange
principal_cylinder_relative_to_bounding_cylinder: adam.perception.marr.AdjunctRelation
components: Mapping[adam.perception.marr.Marr3dModel, adam.perception.marr.AdjunctRelation]
adam.perception.marr.feet_to_meters(feet)
Return type

float

adam.perception.marr.inches_to_meters(inches)
Return type

float
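
A short sketch of the object-instance side of this representation; the particular measurements are made up for illustration.

   from adam.perception.marr import Cylinder, Marr3dObject, inches_to_meters

   # A particular bottle-like object, roughly 25 cm tall and 7 cm across.
   bottle_cylinder = Cylinder(length_in_meters=0.25, diameter_in_meters=0.07)
   bottle = Marr3dObject.create_from_bounding_cylinder(bottle_cylinder)

   # The unit helpers convert imperial measurements for convenience.
   ruler_cylinder = Cylinder(
       length_in_meters=inches_to_meters(12.0),
       diameter_in_meters=inches_to_meters(1.0),
   )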

Supporting classes: Curricula

adam.curriculum

Code to specify what is shown to TopLevelLanguageLearners and in what order.

class adam.curriculum.InstanceGroup(*args, **kwds)

An InstanceGroup can provide triples of (optional) Situations, LinguisticDescriptions, and PerceptualRepresentations for use in training or testing TopLevelLanguageLearners with the Experiment class.

abstract name()

A human-readable name for this instance group.

Return type

str

abstract instances()

The instances in the order they should be shown to the TopLevelLanguageLearner.

Return type

Iterable[Tuple[Optional[~SituationT], ~LinguisticDescriptionT, PerceptualRepresentation[~PerceptionT]]]

class adam.curriculum.ExplicitWithoutSituationInstanceGroup(name, instances)

A collection of instances where the user explicitly specifies the LinguisticDescriptions and PerceptualRepresentations but not the Situations.

name()

A human-readable name for this instance group.

Return type

str

instances()

The instances in the order they should be shown to the TopLevelLanguageLearner.

Return type

Iterable[Tuple[~SituationT, ~LinguisticDescriptionT, PerceptualRepresentation[~PerceptionT]]]

class adam.curriculum.ExplicitWithSituationInstanceGroup(name, instances)

A collection of instances where the user explicitly specifies the Situations, LinguisticDescriptions, and PerceptualRepresentations.

name()

A human-readable name for this instance group.

Return type

str

instances()

The instances in the order they should be shown to the TopLevelLanguageLearner.

Return type

Iterable[Tuple[Optional[~SituationT], ~LinguisticDescriptionT, PerceptualRepresentation[~PerceptionT]]]

class adam.curriculum.GeneratedFromSituationsInstanceGroup(name, situations, language_generator, perception_generator, chooser)

Creates a collection of instances by taking an iterable of Situations and deriving the LinguisticDescriptions and PerceptualRepresentations by applying the language_generator and perception_generator, respectively.

name()

A human-readable name for this instance group.

Return type

str

instances()

The instances in the order they should be shown to the TopLevelLanguageLearner.

Return type

Iterable[Tuple[Optional[~SituationT], ~LinguisticDescriptionT, PerceptualRepresentation[~PerceptionT]]]
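
For example, a training group might be assembled roughly as follows. This is a sketch: my_situations, my_language_generator, and my_perception_generator are placeholders assumed to have been built elsewhere (e.g. from a situation template and the phase-appropriate generators), and the keyword names follow the signature above.

    from adam.curriculum import GeneratedFromSituationsInstanceGroup
    from adam.random_utils import RandomChooser

    train_group = GeneratedFromSituationsInstanceGroup(
        "truck-on-table-training",
        situations=my_situations,                      # placeholder
        language_generator=my_language_generator,      # placeholder
        perception_generator=my_perception_generator,  # placeholder
        chooser=RandomChooser.for_seed(0),
    )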

class adam.curriculum.AblatedLanguageSituationsInstanceGroup(name, instances)
name()

A human-readable name for this instance group.

Return type

str

instances()

The instances in the order they should be shown to the TopLevelLanguageLearner.

Return type

Iterable[Tuple[Optional[~SituationT], ~LinguisticDescriptionT, PerceptualRepresentation[~PerceptionT]]]

class adam.curriculum.AblatedPerceptionSituationsInstanceGroup(name, situations, language_generator, chooser)

Creates a collection of instances by taking an iterable of Situations and deriving the LinguisticDescriptions by applying the language_generator. Unlike GeneratedFromSituationsInstanceGroup, no perception_generator is supplied, as the perception side is ablated.

name()

A human-readable name for this instance group.

Return type

str

instances()

The instances in the order they should be shown to the TopLevelLanguageLearner.

Return type

Iterable[Tuple[Optional[~SituationT], ~LinguisticDescriptionT, PerceptualRepresentation[~PerceptionT]]]

adam.curriculum.curriculum_utils

adam.curriculum.curriculum_utils.CHOOSER_FACTORY()
adam.curriculum.curriculum_utils.TEST_CHOOSER_FACTORY()
adam.curriculum.curriculum_utils.standard_object(debug_handle, root_node=inanimate-object[inanimate[binary]], *, required_properties=(), banned_properties=i{}, added_properties=i{}, banned_ontology_types=i{})

Preferred method of generating template objects, as this automatically prevents liquids and body parts from being selected.

Return type

TemplateObjectVariable
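
For instance (a minimal sketch; the debug handles are arbitrary strings, and HOLLOW is the ontology property documented under adam.ontology.phase1_ontology below):

    from adam.curriculum.curriculum_utils import standard_object
    from adam.ontology.phase1_ontology import HOLLOW

    # An arbitrary inanimate object; liquids and body parts are excluded by default.
    target = standard_object("target-object")

    # Restrict the variable to hollow objects (i.e. potential containers).
    container = standard_object("container-object", required_properties=[HOLLOW])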

adam.curriculum.curriculum_utils.phase3_standard_object(debug_handle, root_node=thing, *, required_properties=(), banned_properties=i{}, added_properties=i{}, banned_ontology_types=i{})

Preferred method of generating phase 3 template objects, as this automatically limits selection to concepts marked as Phase 3 and prevents liquids and body parts from being selected.

Return type

TemplateObjectVariable

adam.curriculum.curriculum_utils.body_part_object(debug_handle, root_node=thing, *, required_properties=(), banned_properties=i{}, added_properties=i{})

Method for generating template objects that are body parts.

Return type

TemplateObjectVariable

adam.curriculum.curriculum_utils.phase1_instances(description, situations, perception_generator=..., language_generator=...)

(The default perception_generator is a HighLevelSemanticsSituationToDevelopmentalPrimitivePerceptionGenerator over the gaila-phase-1 Ontology, with continuous color perception and gaze perceived perfectly; the default language_generator is a SimpleRuleBasedEnglishLanguageGenerator built from the gaila-phase-1 OntologyLexicon. The full lexicon defaults are elided here.)

Convenience method for more compactly creating sub-curricula for phase 1.

Return type

InstanceGroup[HighLevelSemanticsSituation, LinearizedDependencyTree, DevelopmentalPrimitivePerceptionFrame]
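
A usage sketch (my_situations is a placeholder for an iterable of HighLevelSemanticsSituations built elsewhere, e.g. sampled from situation templates):

    from adam.curriculum.curriculum_utils import phase1_instances

    objects_curriculum = phase1_instances(
        "single objects",
        situations=my_situations,  # placeholder
    )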

adam.curriculum.curriculum_utils.phase2_instances(description, situations, perception_generator=..., language_generator=...)

(The default perception_generator is a HighLevelSemanticsSituationToDevelopmentalPrimitivePerceptionGenerator over the gaila-phase-2 Ontology, with continuous color perception and gaze perceived perfectly; the default language_generator is a SimpleRuleBasedEnglishLanguageGenerator built from the gaila-phase-2 OntologyLexicon. The full lexicon defaults are elided here.)

Convenience method for more compactly creating sub-curricula for phase 2.

Return type

InstanceGroup[HighLevelSemanticsSituation, LinearizedDependencyTree, DevelopmentalPrimitivePerceptionFrame]

adam.curriculum.curriculum_utils.phase3_instances(description, situations, language_generator=...)

(The default language_generator is a SimpleRuleBasedEnglishLanguageGenerator built from the gaila-phase-3 OntologyLexicon. The full lexicon defaults are elided here.)

Convenience method for more compactly creating sub-curricula for phase 3.

Return type

InstanceGroup[HighLevelSemanticsSituation, LinearizedDependencyTree, PerceptualRepresentationFrame]

adam.curriculum.curriculum_utils.make_background(salient, all_objects)

Convenience method for determining which objects in the situation should be background objects.

Return type

Iterable[TemplateObjectVariable]

adam.curriculum.curriculum_utils.make_noise_objects(noise_objects, banned_ontology_types=i{})
Return type

Iterable[TemplateObjectVariable]

adam.curriculum.curriculum_utils.learner_template_factory()
Return type

TemplateObjectVariable

adam.curriculum.curriculum_utils.shuffle_curriculum(curriculum, *, rng)
Return type

Sequence[Tuple[~SituationT, ~LinguisticDescriptionT, PerceptualRepresentation[~PerceptionT]]]
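
For example (a sketch; instance_triples is a placeholder for a sequence of (situation, linguistic description, perceptual representation) triples):

    import random

    from adam.curriculum.curriculum_utils import shuffle_curriculum

    shuffled = shuffle_curriculum(instance_triples, rng=random.Random(42))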

adam.curriculum.curriculum_utils.background_relations_builder(background_objects, num_relations, *, target=None, target_2=None, add_noise=True, include_targets_in_noise=False, chooser=RandomChooser(...))
Return type

Iterable[Relation[Any]]

Supporting classes: Experiments

adam.experiment.observer

class adam.experiment.observer.DescriptionObserver(*args, **kwds)

Something which can observe the descriptions produced by TopLevelLanguageLearners.

Typically a DescriptionObserver will provide some sort of summary of its observations when its report method is called.

abstract observe(situation, true_description, perceptual_representation, predicted_scene_description)

Observe a description provided by a TopLevelLanguageLearner.

Parameters
Return type

None

abstract report()

Take some action based on the observations.

Typically, this will be to write a report either to the console, to a file, or both.

Return type

None
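
As a sketch of this interface (assuming DescriptionObserver is a plain abstract base class, as the documentation above suggests), a toy observer that merely counts observations might look like this:

    from adam.experiment.observer import DescriptionObserver

    class CountingObserver(DescriptionObserver):
        """Toy observer that counts how many scene descriptions it has seen."""

        def __init__(self) -> None:
            self._count = 0

        def observe(
            self,
            situation,
            true_description,
            perceptual_representation,
            predicted_scene_description,
        ) -> None:
            self._count += 1

        def report(self) -> None:
            print(f"Observed {self._count} described scenes.")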

class adam.experiment.observer.TopChoiceExactMatchObserver(name)

Log how often the top-scoring predicted LinguisticDescription for a Situation exactly matches the expected LinguisticDescription.

If there are multiple predicted LinguisticDescriptions with the same score, which one is compared to determine matching is undefined.

name: str
observe(situation, true_description, perceptual_representation, predicted_scene_description)

Observe a description provided by a TopLevelLanguageLearner.

Parameters
Return type

None

report()

Take some action based on the observations.

Typically, this will be to write a report either to the console, to a file, or both.

Return type

None

class adam.experiment.observer.CandidateAccuracyObserver(name, accuracy_to_txt=False, txt_path='accuracy_out.txt')

Log how often the ‘gold’ description is present in the learner’s candidate descriptions. Provide an accuracy score.

name: str
accuracy_to_txt: bool
txt_path: str
observe(situation, true_description, perceptual_representation, predicted_scene_description)

Observe a description provided by a TopLevelLanguageLearner.

Parameters
Return type

None

report()

Take some action based on the observations.

Typically, this will be to write a report either to the console, to a file, or both.

Return type

None

accuracy()

Return the accuracy score: the fraction of predictions for which the ‘gold’ description was present in the learner’s candidate descriptions. Returns: accuracy score (float)

Return type

Optional[float]
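
A usage sketch (test_instances and describe are placeholders: each test instance supplies a situation, its gold description, and its perceptual representation, and describe stands in for however the learner’s prediction is obtained):

    from adam.experiment.observer import CandidateAccuracyObserver

    observer = CandidateAccuracyObserver("test-accuracy")

    for situation, gold_description, perception in test_instances:  # placeholder
        observer.observe(situation, gold_description, perception, describe(perception))

    observer.report()
    print("accuracy:", observer.accuracy())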

class adam.experiment.observer.PrecisionRecallObserver(name, *, make_report=False, txt_path='accuracy_out.txt', robust=True)

Log information to calculate the learner’s precision and recall.

name: str
make_report: bool
txt_path: str
robust: bool
observe(situation, true_description, perceptual_representation, predicted_scene_description)

Observe a description provided by a TopLevelLanguageLearner.

Parameters
Return type

None

report()

Take some action based on the observations.

Typically, this will be to write a report either to the console, to a file, or both.

Return type

None

precision()
Return type

Optional[float]

recall()
Return type

Optional[float]

class adam.experiment.observer.LearningProgressHtmlLogger(output_file_str, html_dumper, *, include_links_to_images=False, num_pretty_descriptions=3, sort_by_length=False)
output_file_str: str
html_dumper: adam.curriculum_to_html.CurriculumToHtmlDumper
pre_observed_description: Optional[str]
static create_logger(params)
Return type

LearningProgressHtmlLogger

pre_observer(*, params=Parameters(_data=i{}, namespace_prefix=()), experiment_group_dir=None)
Return type

DescriptionObserver

post_observer(*, params=Parameters(_data=i{}, namespace_prefix=()), experiment_group_dir=None)
Return type

DescriptionObserver

test_observer(*, params=Parameters(_data=i{}, namespace_prefix=()), experiment_group_dir=None)
Return type

DescriptionObserver

pre_observer_log(predicted_descriptions, accuracy=None, precision=None, recall=None)
Return type

None

post_observer_log(*, observer_name, instance_number, situation, true_description, perceptual_representation, predicted_descriptions, test_mode, accuracy=None, precision=None, recall=None)
class adam.experiment.observer.HTMLLoggerPreObserver(name, *, html_logger, candidate_accuracy_observer, precision_recall_observer)

Logs the true description and learner’s descriptions throughout the learning process.

name: str
html_logger: adam.experiment.observer.LearningProgressHtmlLogger
candidate_accuracy_observer: Optional[adam.experiment.observer.CandidateAccuracyObserver]
precision_recall_observer: Optional[adam.experiment.observer.PrecisionRecallObserver]
observe(situation, true_description, perceptual_representation, predicted_scene_description)

Observe a description provided by a TopLevelLanguageLearner.

Parameters
Return type

None

report()

Take some action based on the observations.

Typically, this will be to write a report either to the console, to a file, or both.

Return type

None

class adam.experiment.observer.HTMLLoggerPostObserver(name, *, html_logger, candidate_accuracy_observer, precision_recall_observer, test_mode, counter=0)

Logs the true description and learner’s descriptions throughout the learning process.

name: str
html_logger: adam.experiment.observer.LearningProgressHtmlLogger
candidate_accuracy_observer
precision_recall_observer
test_mode: bool
counter: int
observe(situation, true_description, perceptual_representation, predicted_scene_description)

Observe a description provided by a TopLevelLanguageLearner.

Parameters
Return type

None

report()

Take some action based on the observations.

Typically, this will be to write a report either to the console, to a file, or both.

Return type

None

class adam.experiment.observer.YAMLLogger(name, experiment_path, file_name=None, *, counter=0, copy_curriculum=True)
name: str
experiment_path: pathlib.Path
counter: int
copy_curriculum: bool
file_name: Optional[str]
static from_params(name, params)
observe(situation, true_description, perceptual_representation, predicted_scene_description)

Observe a description provided by a TopLevelLanguageLearner.

Parameters
Return type

None

report()

Take some action based on the observations.

Typically, this will be to write a report either to the console, to a file, or both.

Return type

None

adam.experiment.observer.pretty_descriptions(descriptions, num_descriptions, *, sort_by_length)
Return type

str

Other Code

adam.ui

Interfaces for supporting user interfaces, such as for the DARPA demos.

class adam.ui.UserInterface

Interface to represent user interfaces for demonstrations.

This is just a placeholder for now.

adam.math_3d

Contains math utilities for working in three-dimensional space.

We are not going to work with very large 3D models, so this is not optimized for speed.

class adam.math_3d.Point(x, y, z)

A point in 3D space.

x: float
y: float
z: float
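
A small usage sketch (the distance helper is an illustration only, not part of adam.math_3d):

    import math

    from adam.math_3d import Point

    def distance(a: Point, b: Point) -> float:
        # Euclidean distance between two 3D points.
        return math.sqrt((a.x - b.x) ** 2 + (a.y - b.y) ** 2 + (a.z - b.z) ** 2)

    origin = Point(0.0, 0.0, 0.0)
    corner = Point(1.0, 1.0, 1.0)
    print(distance(origin, corner))  # ~1.732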

adam.random_utils

Utilities for working with random numbers.

This currently contains only an abstraction over random.choice which makes it easier to test things which make random choices.

class adam.random_utils.SequenceChooser

Abstraction over a strategy for selecting items from a sequence.

choice(elements)

Choose one element from elements using some undefined policy.

Parameters
  • elements (Sequence[~T]) – The sequence of elements to choose from. If this sequence is empty, an IndexError should be raised.

Return type

~T

Returns

One of the elements of elements; no further requirement is defined.

class adam.random_utils.FixedIndexChooser(index_to_choose)

A SequenceChooser which always chooses the element at the given index.

If the fixed index exceeds the length of the supplied (non-empty) sequence, then the element at the fixed index modulo the sequence length is returned.

choice(elements)

Choose one element from elements using some undefined policy.

Parameters
  • elements (Sequence[~T]) – The sequence of elements to choose from. If this sequence is empty, an IndexError should be raised.

Return type

~T

Returns

One of the elements of elements; no further requirement is defined.

class adam.random_utils.RandomChooser(random)

A SequenceChooser which delegates the choice to a contained standard library random number generator.

choice(elements)

Choose one element from elements using some undefined policy.

Parameters
  • elements (Sequence[~T]) – The sequence of elements to choose from. If this sequence is empty, an IndexError should be raised.

Return type

~T

Returns

One of the elements of elements; no further requirement is defined.

static for_seed(seed=0)

Get a RandomChooser from a random number generator initialized with the specified seed.

Return type

RandomChooser

class adam.random_utils.RotatingIndexChooser

A SequenceChooser which increments the index it chooses after each choice.

If the current index exceeds the length of the supplied (non-empty) sequence, then the element at the current index modulo the sequence length is returned.

choice(elements)

Choose one element from elements using some undefined policy.

Parameters
  • elements (Sequence[~T]) – The sequence of elements to choose from. If this sequence is empty, an IndexError should be raised.

Return type

~T

Returns

One of the elements of elements; no further requirement is defined.
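
The sketch below contrasts the three concrete choosers using only the constructors and the for_seed() factory documented above; the assumption that RotatingIndexChooser starts at index 0 is ours, not a documented guarantee.

    from adam.random_utils import FixedIndexChooser, RandomChooser, RotatingIndexChooser

    colors = ("red", "blue", "green")

    # Always chooses index 1 ("blue"); an out-of-range index wraps modulo the length.
    fixed = FixedIndexChooser(1)
    print(fixed.choice(colors))  # blue

    # Advances its index after every call (assumed to start at 0): red, blue, green, red, ...
    rotating = RotatingIndexChooser()
    print([rotating.choice(colors) for _ in range(4)])

    # Delegates to a seeded random.Random, so choices are reproducible across runs.
    seeded = RandomChooser.for_seed(42)
    print(seeded.choice(colors))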

GAILA-Specific

adam.ontology.phase1_ontology

The Ontology for use in ISI’s GAILA Phase 1 effort.

Note that this Ontology is only used for training and testing example generation; the learner has no access to it.

The following will eventually end up here:

  • Objects: mommy, daddy, baby, book, house, car, water, ball, juice, cup, box, chair, head, milk, hand, dog, truck, door, hat, table, cookie, bird

  • Actions/Verbs: go, put, come, take, eat, give, turn, sit, drink, push, fall, throw, move, jump, has (possessive), roll, fly

  • Relations, Modifiers, Function Words: basic color terms (red, blue, green, white, black…), one, two, I, me, my, you, your, to, in, on, [beside, behind, in front of, over, under], up, down

adam.ontology.phase1_ontology.subtype(sub, _super)
Return type

None

adam.ontology.phase1_ontology.HOLLOW = hollow[binary]

Whether an object should be thought of as empty on the inside. In particular, hollow objects may serve as containers.

Jackendoff and Landau argue this should be regarded as a primitive of object perception.

adam.ontology.phase1_ontology.RECOGNIZED_PARTICULAR_PROPERTY = recognized-particular[binary]

Indicates that a property in the ontology identifies an object as a known particular (rather than as a member of a class) which is assumed to be known to the TopLevelLanguageLearner. The prototypical cases here are Mom and Dad.

adam.ontology.phase1_ontology.GAZED_AT = gazed-at[binary]

Indicates the object that is the focus of the speaker's attention. This is not currently strictly enforced and is implicitly generated in the perception step if not explicit in a situation.

adam.ontology.phase1_ontology.LEARNER = learner[is-learner]

We represent the language learner itself in the situation, because the size or position of objects relative to the learner itself may be significant for learning.

adam.ontology.phase1_ontology.on_region(reference_object)
Return type

Region[~_ObjectT]

adam.ontology.phase1_ontology.near_region(reference_object, *, direction=None)
Return type

Region[~_ObjectT]

adam.ontology.phase1_ontology.far_region(reference_object, *, direction=None)
Return type

Region[~_ObjectT]

adam.ontology.phase1_ontology.PART_OF = partOf

A relation indicating that one object is part of another object.

adam.ontology.phase1_ontology.SAME_TYPE = same-type

adam.ontology.phase1_ontology.BIGGER_THAN_SAME_TYPE = biggerThanSameType

A relation indicating that one object is bigger than another object.

This is a placeholder for a more sophisticated representation of size: https://github.com/isi-vista/adam/issues/70

adam.ontology.phase1_ontology.SMALLER_THAN_SAME_TYPE = smallerThanSameType

A relation indicating that one object is smaller than another object.

This is a placeholder for a more sophisticated representation of size: https://github.com/isi-vista/adam/issues/70

adam.ontology.phase1_ontology.MUCH_BIGGER_THAN = muchBiggerThan

A relation indicating one axis of a geon is much bigger than another. This should only be used for geon axis relations, not general object relations.

adam.ontology.phase1_ontology.MUCH_SMALLER_THAN = muchSmallerThan

A relation indicating one axis of a geon is much smaller than another. This should only be used for geon axis relations, not general object relations.

adam.ontology.phase1_ontology.ABOUT_THE_SAME_SIZE_AS_LEARNER = aboutSameSizeAsLearner

This is for use only when generating perceptions, where we special-case size relations relative to the learner so that they are also represented as properties, which makes the object learner simpler.

adam.ontology.phase1_ontology.spin_around_primary_axis(object_)
adam.ontology.phase1_ontology.is_recognized_particular(ontology, node)
Return type

bool
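
A hedged usage sketch for is_recognized_particular(): the constant names GAILA_PHASE_1_ONTOLOGY and MOM are assumptions about this module's exports, not guaranteed by this documentation.

    # Hedged sketch: GAILA_PHASE_1_ONTOLOGY and MOM are assumed exports of
    # adam.ontology.phase1_ontology; check the module for the exact names.
    from adam.ontology.phase1_ontology import (
        GAILA_PHASE_1_ONTOLOGY,  # assumed export
        MOM,                     # assumed export
        is_recognized_particular,
    )

    # Mom is a prototypical recognized particular (a known individual rather than
    # a class), so this is expected to print True.
    print(is_recognized_particular(GAILA_PHASE_1_ONTOLOGY, MOM))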

adam.language_specific.english.english_phase_1_lexicon

adam.curriculum.phase1_curriculum

Curricula for DARPA GAILA Phase 1

adam.curriculum.phase1_curriculum.falling_template(theme, *, lands_on_ground, syntax_hints, spatial_properties=i{}, background)
Return type

Phase1SituationTemplate

adam.curriculum.phase1_curriculum.fall_on_ground_template(theme, *, spatial_properties=i{}, background=i{})
Return type

Phase1SituationTemplate

adam.curriculum.phase1_curriculum.make_fall_templates(background)
Return type

Iterable[Phase1SituationTemplate]

adam.curriculum.phase1_curriculum.make_give_templates(background)
Return type

Iterable[Phase1SituationTemplate]

adam.curriculum.phase1_curriculum.bare_fly(agent, *, up, syntax_hints, spatial_properties=i{}, background=i{})
Return type

Phase1SituationTemplate

adam.curriculum.phase1_curriculum.make_fly_templates(background, banned_ontology_types=i{})
Return type

Iterable[Phase1SituationTemplate]

adam.curriculum.phase1_curriculum.intransitive_roll(agent, surface, *, spatial_properties=i{}, background=i{})
Return type

Phase1SituationTemplate

adam.curriculum.phase1_curriculum.transitive_roll(agent, theme, surface, *, spatial_properties=i{}, background=i{})
Return type

Phase1SituationTemplate

adam.curriculum.phase1_curriculum.transitive_roll_with_surface(agent, theme, surface, *, spatial_properties=i{}, background=i{})
Return type

Phase1SituationTemplate

adam.curriculum.phase1_curriculum.make_roll_templates(noise_objects)
Return type

Sequence[Phase1SituationTemplate]

adam.curriculum.phase1_curriculum.make_transitive_roll_templates(noise_objects)
Return type

Iterable[Phase1SituationTemplate]

adam.curriculum.phase1_curriculum.make_jump_template(agent, *, use_adverbial_path_modifier, spatial_properties=i{}, background=i{})
Return type

Phase1SituationTemplate

adam.curriculum.phase1_curriculum.make_pass_template(agent, theme, goal, *, use_adverbial_path_modifier, operator=None, spatial_properties=i{}, background=i{})
Return type

Phase1SituationTemplate

adam.curriculum.phase1_curriculum.make_jump_templates(noise_objects)
adam.curriculum.phase1_curriculum.make_put_templates(noise_objects)
Return type

Iterable[Phase1SituationTemplate]

adam.curriculum.phase1_curriculum.make_drink_template(agent, liquid, container, noise_objects)
Return type

Phase1SituationTemplate

adam.curriculum.phase1_curriculum.make_drink_from_template(agent, liquid, container, noise_objects)
Return type

Phase1SituationTemplate

adam.curriculum.phase1_curriculum.make_eat_template(agent, patient, background=i{})
Return type

Phase1SituationTemplate

adam.curriculum.phase1_curriculum.make_sit_template_intransitive(agent, sit_surface, noise_objects, *, syntax_hints, surface)
Return type

Phase1SituationTemplate

adam.curriculum.phase1_curriculum.make_sit_transitive(agent, sit_surface, noise_objects, *, syntax_hints, surface)
Return type

Phase1SituationTemplate

adam.curriculum.phase1_curriculum.make_sit_templates(noise_objects, banned_ontology_types=i{})
Return type

Iterable[Phase1SituationTemplate]

adam.curriculum.phase1_curriculum.make_take_template(agent, theme, *, use_adverbial_path_modifier, spatial_properties=None, operator=None, background=i{})
Return type

Phase1SituationTemplate

adam.curriculum.phase1_curriculum.make_walk_run_template(agent, *, use_adverbial_path_modifier, operator=None, spatial_properties=None, background=i{})
Return type

Phase1SituationTemplate

adam.curriculum.phase1_curriculum.bare_move_template(agent, goal_reference, *, spatial_properties=i{}, background=i{})
Return type

Phase1SituationTemplate

adam.curriculum.phase1_curriculum.transitive_move_template(agent, theme, goal_reference, *, spatial_properties=i{}, background=i{})
Return type

Phase1SituationTemplate

adam.curriculum.phase1_curriculum.make_move_templates(noise_objects)
Return type

Iterable[Phase1SituationTemplate]

adam.curriculum.phase1_curriculum.make_spin_templates(noise_objects)
Return type

Iterable[Phase1SituationTemplate]

adam.curriculum.phase1_curriculum.make_go_templates(noise_objects)
Return type

Iterable[Phase1SituationTemplate]

adam.curriculum.phase1_curriculum.make_push_templates(agent, theme, push_surface, push_goal, *, operator=None, use_adverbial_path_modifier, spatial_properties=i{}, background=i{})
Return type

List[Phase1SituationTemplate]

adam.curriculum.phase1_curriculum.throw_on_ground_template(agent, theme, *, spatial_properties=i{}, background=i{})
Return type

Phase1SituationTemplate

adam.curriculum.phase1_curriculum.throw_template(agent, theme, goal, *, spatial_properties=i{}, background=i{})
Return type

Phase1SituationTemplate

adam.curriculum.phase1_curriculum.throw_up_down_template(agent, theme, goal, *, is_up, spatial_properties=i{}, background=i{})
Return type

Phase1SituationTemplate

adam.curriculum.phase1_curriculum.throw_to_template(agent, theme, goal, *, spatial_properties=None, background=i{})
Return type

Phase1SituationTemplate

adam.curriculum.phase1_curriculum.throw_to_region_template(agent, theme, goal, *, spatial_properties=None, background=i{})
Return type

Phase1SituationTemplate

adam.curriculum.phase1_curriculum.make_throw_animacy_templates(noise_objects)
Return type

Iterable[Phase1SituationTemplate]

adam.curriculum.phase1_curriculum.make_throw_templates(noise_objects)
Return type

Iterable[Phase1SituationTemplate]

adam.curriculum.phase1_curriculum.build_gaila_phase1_object_curriculum(num_samples, num_noise_objects, language_generator)

One particular instantiation of the object-learning parts of the curriculum for GAILA Phase 1.

Return type

Sequence[InstanceGroup[HighLevelSemanticsSituation, LinearizedDependencyTree, DevelopmentalPrimitivePerceptionFrame]]

adam.curriculum.phase1_curriculum.build_gaila_plurals_curriculum(num_samples, num_noise_objects, language_generator)
Return type

Sequence[InstanceGroup[HighLevelSemanticsSituation, LinearizedDependencyTree, DevelopmentalPrimitivePerceptionFrame]]

adam.curriculum.phase1_curriculum.build_gaila_generics_curriculum(num_samples, num_noise_objects, language_generator)
Return type

Sequence[InstanceGroup[HighLevelSemanticsSituation, LinearizedDependencyTree, DevelopmentalPrimitivePerceptionFrame]]

adam.curriculum.phase1_curriculum.build_gaila_phase1_attribute_curriculum(num_samples, num_noise_objects, language_generator)

One particular instantiation of the attribute-learning parts of the curriculum for GAILA Phase 1.

Return type

Sequence[InstanceGroup[HighLevelSemanticsSituation, LinearizedDependencyTree, DevelopmentalPrimitivePerceptionFrame]]

adam.curriculum.phase1_curriculum.build_classifier_curriculum(num_samples, num_noise_objects, language_generator)

One particular instantiation of the Chinese classifier learning curriculum.

Return type

Sequence[InstanceGroup[HighLevelSemanticsSituation, LinearizedDependencyTree, DevelopmentalPrimitivePerceptionFrame]]

adam.curriculum.phase1_curriculum.build_gaila_phase1_relation_curriculum(num_samples, num_noise_objects, language_generator)

One particular instantiation of the relation-learning parts of the curriculum for GAILA Phase 1.

Return type

Sequence[InstanceGroup[HighLevelSemanticsSituation, LinearizedDependencyTree, DevelopmentalPrimitivePerceptionFrame]]

adam.curriculum.phase1_curriculum.build_gaila_phase1_verb_curriculum(num_samples, num_noise_objects, language_generator)

One particular instantiation of the verb-learning parts of the curriculum for GAILA Phase 1.

Return type

Sequence[InstanceGroup[HighLevelSemanticsSituation, LinearizedDependencyTree, DevelopmentalPrimitivePerceptionFrame]]

adam.curriculum.phase1_curriculum.build_gaila_phase_1_curriculum(num_samples, num_noise_objects, language_generator)

One particular instantiation of the curriculum for GAILA Phase 1.

Return type

Sequence[InstanceGroup[HighLevelSemanticsSituation, LinearizedDependencyTree, DevelopmentalPrimitivePerceptionFrame]]
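
As a hedged end-to-end sketch, the full Phase 1 curriculum can be assembled with the rule-based English generator documented below. The lexicon constant GAILA_PHASE_1_ENGLISH_LEXICON and the sample counts are assumptions for illustration only.

    from adam.curriculum.phase1_curriculum import build_gaila_phase_1_curriculum
    from adam.language_specific.english.english_language_generator import (
        SimpleRuleBasedEnglishLanguageGenerator,
    )
    from adam.language_specific.english.english_phase_1_lexicon import (
        GAILA_PHASE_1_ENGLISH_LEXICON,  # assumed constant name
    )

    # Build a language generator from the Phase 1 English lexicon.
    language_generator = SimpleRuleBasedEnglishLanguageGenerator(
        ontology_lexicon=GAILA_PHASE_1_ENGLISH_LEXICON
    )

    # num_samples and num_noise_objects are illustrative values.
    curriculum = build_gaila_phase_1_curriculum(
        num_samples=5,
        num_noise_objects=0,
        language_generator=language_generator,
    )
    print(len(curriculum))  # number of InstanceGroups in the curriculum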

Language-Specific

adam.language_specific

Defines language-independent properties at the module level.

adam.language_specific.english

adam.language_specific.english.DETERMINERS = i{'the', 'a', 'yi1_ge4', 'yi1_jang1', 'yi1_ben3', 'yi1_jyan1', 'yi1_lyang4', 'yi1_bei1', 'yi1_ba3', 'yi1_jr1', 'yi1_shan4', 'yi1_ding3', 'yi1_kwai4', 'yi1_tiao2', 'yi1_zhi1'}

These are determiners we automatically add to the beginning of non-proper English noun phrases. This is a language-specific hack since learning determiners is out of our scope: https://github.com/isi-vista/adam/issues/498

adam.language_specific.english.ENGLISH_BLOCK_DETERMINERS = i{'you', 'me', 'your', 'my', 'the', 'a'}

These words block the addition of the determiners above to English noun phrases.

adam.language_specific.english.english_language_generator

class adam.language_specific.english.english_language_generator.SimpleRuleBasedEnglishLanguageGenerator(*, ontology_lexicon)

A simple rule-based approach for translating HighLevelSemanticsSituations to English dependency trees.

We currently only generate a single possible LinearizedDependencyTree for a given situation.

generate_language(situation, chooser)

Generate a collection of human language descriptions of the given Situation.

Parameters
  • situation (HighLevelSemanticsSituation) – the Situation to describe.

  • chooser (SequenceChooser) – the SequenceChooser to use if any random decisions are required.

Return type

ImmutableSet[LinearizedDependencyTree]

Returns

A LinguisticDescription of that situation.

adam.language_specific.english.english_syntax
