vrml.cache
Caching mechanism for scene graph
The scene-graph Cache allows you to associate
data with particular nodes in the scene graph without
actually attaching the information to the node
being annotated.
This allows us to store cached information such as:
display list objects
texture objects
compiled array geometry
The cache uses weak-reference key dictionaries to clear
the cache of data associated with deleted nodes within
the scene graph. Similarly, the cache is cleared when node
fields on which the cache depends are changed.
The cache is based on the vrml.field and dispatch.dispatcher
modules. All field objects by default provide dispatcher
notifications on set and delete, and can provide
notification on get as well. The cache watches for these
notifications on dependent nodes, where the dispatcher
messages are (type, fieldObject) for type in ("set", "del").
There is a per-context cache and a global cache.
Display lists, textures, and the like should be stored
in the per-context cache, while data that depends only
on the node can be stored in the global cache.
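The split between the two caches might be sketched as follows (a minimal illustration with hypothetical names such as `cache_for` and `GLContext`; the real module's lookup API may differ):

```python
# Hypothetical sketch of the per-context / global cache split;
# the real vrml.cache structures may differ.
context_caches = {}   # one cache per rendering context
global_cache = {}     # for data depending only on the node itself

def cache_for(context=None):
    """Return the global cache, or the cache for a given context."""
    if context is None:
        return global_cache
    return context_caches.setdefault(id(context), {})

class GLContext:
    """Stand-in for a rendering context."""

ctx = GLContext()
assert cache_for(ctx) is cache_for(ctx)   # stable per-context cache
assert cache_for() is global_cache        # node-only data goes here
```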
Note:
The cache makes extensive use of weak references, and
was the failure case which exposed a number of problems
with the Python 2.2.2 weakref and WeakKeyDictionary
mechanisms.
Classes
class Cache( dict ):
Trivial sub-class of a dict which has some convenience methods
Maps id(client): {
key="": CacheHolder()
}
That is, for each client node, a dictionary
maps opaque keys (normally strings) to CacheHolder
instances. The CacheHolder is responsible for
most of the implementation of the cache.
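Illustratively, the two-level mapping has this shape (plain strings stand in for the CacheHolder instances the real Cache stores):

```python
class Node:
    """Stand-in for a scene-graph node."""

node = Node()
# Cache maps id(client) -> {opaque key -> cached value}
cache = {
    id(node): {
        '': 'compiled geometry',          # default key
        'shadow': 'compiled shadow data', # separate storage dimension
    },
}
assert cache[id(node)]['shadow'] == 'compiled shadow data'
```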
class CacheHolder( object ):
Holder for data values within a cache
The CacheHolder provides the bulk of the cache
implementation. It associates an opaque data value
with a client node and an opaque key. The key
value allows multiple dimensions of storage, so
that, for instance, compiled shadow information
can be stored separately from compiled geometry
information.
The depend method uses the dispatcher module
to invalidate this CacheHolder when the given
fields for the given nodes are changed.
Attributes:
client -- weak reference to the client node
key -- strong reference to the opaque key value
data -- strong reference to the opaque data value
cache -- weak reference to the cache in which
we are storing ourselves
__call__( self, signal=None, sender=None )
Delete the cached value (this object)
This de-registers ourselves from our cache object,
with suitable checks for whether our cache is still
alive itself, and whether it still has an entry
for our client and our key.
If this is the last registered CacheHolder
for our client, this also deletes the overall
cache dictionary for the client.
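The deletion logic might look roughly like this sketch (a simplified stand-in with hypothetical names, not the actual vrml.cache implementation; note that a dict subclass is used for the cache because plain dicts cannot be weakly referenced):

```python
import weakref

class Cache(dict):
    """Stand-in cache; a dict subclass is weakly referenceable."""

class Node:
    """Stand-in for a scene-graph node."""

class Holder:
    """Simplified sketch of CacheHolder's deletion behaviour."""
    def __init__(self, client, data, key='', cache=None):
        self.client = weakref.ref(client)   # weak ref to client node
        self.key = key                      # strong ref to opaque key
        self.data = data                    # strong ref to opaque data
        self.cache = weakref.ref(cache)     # weak ref to owning cache

    def __call__(self, signal=None, sender=None):
        # De-register from the cache, checking that the cache and our
        # client are both still alive and that our entry still exists.
        cache, client = self.cache(), self.client()
        if cache is None or client is None:
            return
        holders = cache.get(id(client))
        if holders is not None and holders.get(self.key) is self:
            del holders[self.key]
            if not holders:
                # last entry for this client: drop its whole dict
                del cache[id(client)]

node, cache = Node(), Cache()
holder = Holder(node, data='display list 42', cache=cache)
cache[id(node)] = {'': holder}
holder()                         # simulate an invalidation signal
assert id(node) not in cache     # client's entry removed entirely
```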
__init__( self, client, data, key='', cache={} )
Initialise the cache-deletion callable
client -- the node doing the caching; if gc'd,
    the entire cache for the node is deleted
data -- the opaque data value being cached
key -- opaque key into the cache's per-node storage
cache -- the particular cache in which to store ourselves
depend( self, node, field=None )
Add a dependency on the given node's field value
node -- the node being watched
field -- the field on the node being watched
Dependency on the node means that this cache
holder will be invalidated if the field value
changes. It does not create a dependency on
the existence of the node, so you should also
set a dependency on the field holding any nodes
whose removal should invalidate this CacheHolder.
Note:
This does not affect any other CacheHolder
for our client node.
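The pattern can be illustrated with a tiny stand-in for the dispatcher module (the real code uses dispatch.dispatcher; the `connect`/`send` helpers and `depend` function here are simplified assumptions):

```python
# Minimal stand-in for dispatch.dispatcher: receivers keyed by
# (signal, sender). Names and signatures are simplified.
_receivers = {}

def connect(receiver, signal, sender):
    _receivers.setdefault((signal, id(sender)), []).append(receiver)

def send(signal, sender):
    for receiver in _receivers.get((signal, id(sender)), []):
        receiver(signal=signal, sender=sender)

class Field:
    """Stand-in for a vrml.field field object."""

class Node:
    """Stand-in for a scene-graph node."""

def depend(holder, node, field):
    # Sketch of CacheHolder.depend: invalidate the holder whenever
    # the given field is set or deleted on the given node.
    for action in ('set', 'del'):
        connect(holder, signal=(action, field), sender=node)

seen = []
node, field = Node(), Field()
depend(lambda signal=None, sender=None: seen.append(signal), node, field)
send(('set', field), node)       # simulate a field assignment
assert seen == [('set', field)]  # the holder was notified
```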