Note
Even though these types are made for public consumption, and their usage should be encouraged and made easily possible, note that they may be moved out into new libraries at various points in the future. If you are using these types without using the rest of this library, you are strongly encouraged to be a vocal proponent of getting them made into isolated libraries (as using these types in this manner is not the expected and/or desired usage).
taskflow.types.entity.Entity(kind, name, metadata)
Bases: object

Entity object that identifies some resource/item/other.

Parameters:
- kind – immutable type/kind that identifies this entity (typically unique to a library/application)
- name – immutable name that can be used to uniquely identify this entity among many other entities
- metadata – immutable dictionary of metadata that is associated with this entity (and typically has keys/values that further describe this entity)
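Conceptually, an entity is just an immutable (kind, name, metadata) triple. A minimal stand-in can be sketched with the standard library (this is an illustration of the shape only; EntitySketch is a hypothetical name, not the library's actual implementation):

```python
from types import MappingProxyType
from typing import Mapping, NamedTuple


class EntitySketch(NamedTuple):
    # Hypothetical stand-in for taskflow.types.entity.Entity, shown only
    # to illustrate the (kind, name, metadata) shape.
    kind: str          # type/kind, typically unique to a library/application
    name: str          # uniquely identifies this entity among many others
    metadata: Mapping  # read-only mapping that further describes the entity


# MappingProxyType gives a read-only view, matching the immutability intent.
worker = EntitySketch("worker", "worker-1", MappingProxyType({"topic": "math"}))
print(worker.kind, worker.name, dict(worker.metadata))
```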
taskflow.types.failure.Failure(exc_info=None, **kwargs)
Bases: object
An immutable object that represents failure.
Failure objects encapsulate exception information so that they can be re-used later to re-raise, inspect, examine, log, print, serialize, deserialize…
One example where they are depended upon is in the WBE engine. When a remote worker throws an exception, the WBE based engine will receive that exception and desire to reraise it to the user/caller of the WBE based engine for appropriate handling (this matches the behavior of non-remote engines). To accomplish this a failure object (or a to_dict() form) would be sent over the WBE channel, and the WBE based engine would deserialize it and use this object's reraise() method to cause an exception that contains similar/equivalent information as the original exception to be reraised, allowing the user (or the WBE engine itself) to then handle the worker failure/exception as they desire.
For those who are curious, here are a few reasons why the original exception itself may not be reraised and a wrapped failure exception object is reraised instead. These explanations are only applicable when a failure object is serialized and deserialized (when it is retained inside the python process that the exception was created in, the original exception can be reraised correctly without issue).
Traceback objects are not serializable/recreatable, since they contain references to stack frames at the location where the exception was raised. When a failure object is serialized and sent across a channel and recreated it is not possible to restore the original traceback and originating stack frames.
The original exception type can not be guaranteed to be found, workers can run code that is not accessible/available when the failure is being deserialized. Even if it was possible to use pickle safely it would not be possible to find the originating exception or associated code in this situation.
The original exception type can not be guaranteed to be constructed in a correct manner. At the time of failure object creation the exception has already been created, and the failure object can not assume it has the knowledge (or the ability) to recreate the original type of the captured exception (this is especially hard if the original exception was created through a complex process by some custom exception constructor).
The original exception type can not be guaranteed to be constructed in a safe manner. Importing foreign exception types dynamically can be problematic when not done correctly and in a safe manner; since failure objects can capture any exception it would be unsafe to try to import those exception types' namespaces and modules on the receiver side dynamically (this would create similar issues as the pickle module in python has, where foreign modules can be imported and code in those modules can run when this happens, causing issues and side-effects that the receiver would not have intended to cause).
TODO(harlowja): use parts of 17911 and the backport at https://pypi.org/project/traceback2/ to (hopefully) simplify the methods and contents of this object…
BASE_EXCEPTIONS = ('BaseException', 'Exception')
Root exceptions of all other python exceptions.
SCHEMA = {
    '$ref': '#/definitions/cause',
    'definitions': {
        'cause': {
            'type': 'object',
            'additionalProperties': True,
            'properties': {
                'causes': {'type': 'array',
                           'items': {'$ref': '#/definitions/cause'}},
                'exc_args': {'type': 'array', 'minItems': 0},
                'exc_type_names': {'type': 'array',
                                   'items': {'type': 'string'},
                                   'minItems': 1},
                'exception_str': {'type': 'string'},
                'traceback_str': {'type': 'string'},
                'version': {'type': 'integer', 'minimum': 0},
            },
            'required': ['exception_str', 'traceback_str', 'exc_type_names'],
        },
    },
}
Expected failure schema (in json schema format).
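For illustration, a dictionary matching this schema's required keys can be captured with the standard library alone (a minimal sketch; this is not the library's actual to_dict() implementation, and the real output may differ):

```python
import sys
import traceback


def capture_failure_dict():
    # Capture the active exception in roughly the shape SCHEMA describes.
    try:
        raise ValueError("boom")
    except ValueError:
        exc_type, exc_value, tb = sys.exc_info()
        return {
            # Type name plus its bases, stopping before 'object'.
            'exc_type_names': [c.__name__ for c in exc_type.__mro__
                               if c is not object],
            'exc_args': list(exc_value.args),
            'exception_str': str(exc_value),
            'traceback_str': ''.join(
                traceback.format_exception(exc_type, exc_value, tb)),
            'version': 0,
        }


captured = capture_failure_dict()
print(captured['exc_type_names'])
```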
matches(other)
Checks if another object is equivalent to this object.

Returns: boolean
exception
Exception value, or none if exception value is not present. Exception value may be lost during serialization.
exception_str
String representation of exception.

exception_args
Tuple of arguments given to the exception constructor.
exc_info
Exception info tuple or none. The contents of this tuple are (exception type, exception value, traceback); if none, then no contents can be examined.
traceback_str
Exception traceback as string.
reraise_if_any(failures)
Re-raise exceptions if argument is not empty.

If the argument is an empty list/tuple/iterator, this method returns None. If the argument converts into a list with a single Failure object in it, that failure is reraised. Else, a WrappedFailure exception is raised with the failure list as causes.
check(*exc_classes)
Check if any of exc_classes caused the failure.

Arguments of this method can be exception types or type names (strings). If the captured exception is an instance of an exception of a given type, the corresponding argument is returned. Else, None is returned.
causes
Tuple of all inner failure causes of this failure.

NOTE(harlowja): Does not include the current failure (only returns connected causes of this failure, if any). This property is really only useful on 3.x or newer versions of python as older versions do not have associated causes (the tuple will always be empty on 2.x versions of python).

Refer to PEP 3134, PEP 409 and PEP 415 for what this is examining to find failure causes.
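The cause chaining those PEPs describe can be seen in plain python; each exception's __cause__ attribute links back to the failure that triggered it:

```python
# PEP 3134 explicit chaining: 'raise ... from ...' records the cause.
try:
    try:
        raise ValueError("root cause")
    except ValueError as inner:
        raise RuntimeError("task failed") from inner
except RuntimeError as outer:
    exc = outer

# __cause__ holds the explicitly chained original exception.
print(type(exc.__cause__).__name__, exc.__cause__)
```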
taskflow.types.graph.Graph(incoming_graph_data=None, name='')
Bases: networkx.classes.graph.Graph

A graph subclass with useful utility functions.
taskflow.types.graph.DiGraph(incoming_graph_data=None, name='')
Bases: networkx.classes.digraph.DiGraph

A directed graph subclass with useful utility functions.
get_edge_data(u, v, default=None)
Returns a copy of the edge attribute dictionary between (u, v).

NOTE(harlowja): this differs from the networkx get_edge_data() as that function does not return a copy (but returns a reference to the actual edge data).
pformat()
Pretty formats your graph into a string.

This pretty formatted string representation includes many useful details about your graph, including: name, type, frozenness, node count, nodes, edge count, edges, graph density and graph cycles (if any).
bfs_predecessors_iter(n)
Iterates breadth first over all predecessors of a given node.

This will go through the node's predecessors, then each predecessor node's predecessors and so on until no more predecessors are found.

NOTE(harlowja): predecessor cycles (if they exist) will not be iterated over more than once (this prevents infinite iteration).
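The cycle guard described in that note amounts to tracking a visited set. A small sketch over a plain predecessor mapping (illustrative only; this is not the library's implementation, which operates on networkx graphs):

```python
from collections import deque


def bfs_predecessors(preds, start):
    # preds maps node -> list of predecessor nodes.
    # Nodes in cycles are yielded at most once thanks to the 'seen' set.
    seen = {start}
    queue = deque(preds.get(start, ()))
    while queue:
        node = queue.popleft()
        if node in seen:
            continue
        seen.add(node)
        yield node
        queue.extend(preds.get(node, ()))


# 'a' is preceded by 'b', 'b' by 'c', and 'c' cycles back to 'b'.
preds = {'a': ['b'], 'b': ['c'], 'c': ['b']}
print(list(bfs_predecessors(preds, 'a')))  # ['b', 'c'] (no infinite loop)
```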
taskflow.types.graph.OrderedDiGraph(incoming_graph_data=None, name='')
Bases: taskflow.types.graph.DiGraph

A directed graph subclass with useful utility functions.

This derivative retains node, edge, insertion and iteration ordering (so that the iteration order matches the insertion order).

node_dict_factory
alias of collections.OrderedDict

adjlist_outer_dict_factory
alias of collections.OrderedDict

adjlist_inner_dict_factory
alias of collections.OrderedDict

edge_attr_dict_factory
alias of collections.OrderedDict
taskflow.types.graph.OrderedGraph(incoming_graph_data=None, name='')
Bases: taskflow.types.graph.Graph

A graph subclass with useful utility functions.

This derivative retains node, edge, insertion and iteration ordering (so that the iteration order matches the insertion order).

node_dict_factory
alias of collections.OrderedDict

adjlist_outer_dict_factory
alias of collections.OrderedDict

adjlist_inner_dict_factory
alias of collections.OrderedDict

edge_attr_dict_factory
alias of collections.OrderedDict
taskflow.types.notifier.Listener(callback, args=None, kwargs=None, details_filter=None)
Bases: object

Immutable helper that represents a notification listener/target.
callback
Callback (can not be none) to call with event + details.

details_filter
Callback (may be none) to call to discard events + details.

kwargs
Dictionary of keyword arguments to use in future calls.

args
Tuple of positional arguments to use in future calls.
taskflow.types.notifier.Notifier
Bases: object

A notification (pub/sub like) helper class.

It is intended to be used to subscribe to notifications of events occurring, as well as to allow an entity to post said notifications to any associated subscribers, without having either entity care about how this notification occurs.
Not thread-safe when a single notifier is mutated at the same time by multiple threads. For example, having multiple threads call into register() or reset() at the same time could potentially end badly. It is thread-safe when only notify() calls or other read-only actions (like calling into is_registered()) are occurring at the same time.
RESERVED_KEYS = ('details',)
Keys that can not be used in callback arguments.

ANY = '*'
Kleene star constant that is used to receive all notifications.
is_registered(event_type, callback, details_filter=None)
Check if a callback is registered.

Returns: boolean
notify(event_type, details)
Notify about event occurrence.

All callbacks registered to receive notifications about the given event type will be called. If the provided event type can not be used to emit notifications (this is checked via the can_be_registered() method) then it will silently be dropped (notification failures are not allowed to cause or raise exceptions).

Parameters:
- event_type – event type that occurred
- details (dictionary) – additional event details dictionary passed to callback keyword argument with the same name
register(event_type, callback, args=None, kwargs=None, details_filter=None)
Register a callback to be called when event of a given type occurs.

Callback will be called with the provided args and kwargs when the event type occurs (or on any event if event_type equals ANY). It will also get an additional keyword argument, details, that will hold the event details provided to the notify() method (if a details filter callback is provided then the target callback will only be triggered if the details filter callback returns a truthy value).

Parameters:
- event_type – event type input
- callback – function callback to be registered
- args (list) – non-keyworded arguments
- kwargs (dictionary) – key-value pair arguments
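To show how the register/notify flow fits together, here is a minimal pub/sub sketch in the same spirit (hypothetical class and attribute names; this is not the Notifier implementation itself):

```python
ANY = '*'  # mirrors the Kleene star constant described above


class MiniNotifier:
    # Hypothetical sketch of a Notifier-like register/notify flow.
    def __init__(self):
        self._topics = {}

    def register(self, event_type, callback, details_filter=None):
        self._topics.setdefault(event_type, []).append(
            (callback, details_filter))

    def notify(self, event_type, details):
        # Exact-type subscribers first, then ANY ('*') subscribers.
        for key in (event_type, ANY):
            for callback, details_filter in self._topics.get(key, []):
                if details_filter is not None and not details_filter(details):
                    continue  # filter returned a falsy value: skip callback
                callback(event_type, details=details)


seen = []
n = MiniNotifier()
n.register('task.start', lambda et, details: seen.append((et, details)))
n.notify('task.start', {'name': 'compile'})
print(seen)  # [('task.start', {'name': 'compile'})]
```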
deregister(event_type, callback, details_filter=None)
Remove a single listener bound to event event_type.

Parameters:
- event_type – deregister listener bound to event_type

deregister_event(event_type)
Remove a group of listeners bound to event event_type.

Parameters:
- event_type – deregister listeners bound to event_type
taskflow.types.notifier.RestrictedNotifier(watchable_events, allow_any=True)
Bases: taskflow.types.notifier.Notifier

A notification class that restricts events registered/triggered.

NOTE(harlowja): This class, unlike Notifier, restricts and disallows registering callbacks for event types that are not declared when constructing the notifier.
taskflow.types.notifier.register_deregister(notifier, event_type, callback=None, args=None, kwargs=None, details_filter=None)
Context manager that registers a callback, then deregisters on exit.

This helper accepts none as the callback (in which case nothing is registered or deregistered), which is different from the behavior of the register method, which will not accept none as it is not callable…
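A register-then-deregister helper of this shape can be sketched with contextlib (illustrative only; the real helper also forwards args, kwargs and details_filter, and the Recorder class below is a hypothetical test double):

```python
from contextlib import contextmanager


@contextmanager
def register_deregister_sketch(notifier, event_type, callback=None):
    # Sketch: register on entry, always deregister on exit.
    if callback is None:
        yield  # none callback: register/deregister nothing
        return
    notifier.register(event_type, callback)
    try:
        yield
    finally:
        notifier.deregister(event_type, callback)


class Recorder:
    # Hypothetical stand-in that just records register/deregister calls.
    def __init__(self):
        self.calls = []

    def register(self, event_type, callback):
        self.calls.append(('register', event_type))

    def deregister(self, event_type, callback):
        self.calls.append(('deregister', event_type))


rec = Recorder()
with register_deregister_sketch(rec, 'done', print):
    pass
print(rec.calls)  # [('register', 'done'), ('deregister', 'done')]
```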
taskflow.types.sets.OrderedSet(iterable=None)
Bases: collections.abc.Set, collections.abc.Hashable

A read-only hashable set that retains insertion/initial ordering.

It should work in all existing places that frozenset is used.

See: https://mail.python.org/pipermail/python-ideas/2009-May/004567.html for an idea thread that may eventually (someday) result in this (or similar) code being included in the mainline python codebase (although the end result of that thread is somewhat discouraging in that regard).
intersection(*sets)
Return the intersection of two or more sets as a new set (i.e. elements that are common to all of the sets).
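The ordering guarantee can be approximated with a plain dict, whose keys preserve insertion order in python 3.7+ (a sketch of the idea only, not the OrderedSet implementation):

```python
def ordered_unique(iterable):
    # dict.fromkeys de-duplicates while preserving first-seen order,
    # similar in spirit to OrderedSet's insertion ordering.
    return list(dict.fromkeys(iterable))


print(ordered_unique(['b', 'a', 'b', 'c', 'a']))  # ['b', 'a', 'c']
```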
taskflow.types.tree.FrozenNode
Bases: Exception

Exception raised when a frozen node is modified.

taskflow.types.tree.Node(item, **kwargs)
Bases: object

An n-ary node class that can be used to create tree structures.
HORIZONTAL_CONN = '__'
Default string used to horizontally connect a node to its parent (used in pformat()).

VERTICAL_CONN = '|'
Default string used to vertically connect a node to its parent (used in pformat()).
add(child)
Adds a child to this node (appends to left of existing children).

NOTE(harlowja): this will also set the child's parent to be this node.
find_first_match(matcher, only_direct=False, include_self=True)
Finds the first node for which the matcher callback returns true.

This will search not only this node but also any children nodes (in depth first order, from right to left) and finally if nothing is matched then None is returned instead of a node object.

Parameters:
- matcher – callback that takes one positional argument (a node) and returns true if it matches the desired node or false if not
- only_direct – only look at current node and its direct children (implies that this does not search using depth first)
- include_self – include the current node during searching

Returns: the node that matched (or None)
find(item, only_direct=False, include_self=True)
Returns the first node for an item if it exists in this node.

This will search not only this node but also any children nodes (in depth first order, from right to left) and finally if nothing is matched then None is returned instead of a node object.

Parameters:
- item – item to look for
- only_direct – only look at current node and its direct children (implies that this does not search using depth first)
- include_self – include the current node during searching

Returns: the node that matched the provided item (or None)
disassociate()
Removes this node from its parent (if any).

Returns: occurrences of this node that were removed from its parent
remove(item, only_direct=False, include_self=True)
Removes an item from this node's children.

This will search not only this node but also any children nodes and finally if nothing is found then a value error is raised instead of the normally returned removed node object.

Parameters:
- item – item to lookup
- only_direct – only look at current node and its direct children (implies that this does not search using depth first)
- include_self – include the current node during searching
pformat(stringify_node=None, linesep='\n', vertical_conn='|', horizontal_conn='__', empty_space=' ', starting_prefix='')
Formats this node + children into a nice string representation.
Example:
>>> from taskflow.types import tree
>>> yahoo = tree.Node("CEO")
>>> yahoo.add(tree.Node("Infra"))
>>> yahoo[0].add(tree.Node("Boss"))
>>> yahoo[0][0].add(tree.Node("Me"))
>>> yahoo.add(tree.Node("Mobile"))
>>> yahoo.add(tree.Node("Mail"))
>>> print(yahoo.pformat())
CEO
|__Infra
| |__Boss
| |__Me
|__Mobile
|__Mail
child_count(only_direct=True)
Returns how many children this node has.

This can be either only the direct children of this node or inclusive of all children nodes of this node (children of children and so-on).

NOTE(harlowja): it does not account for the current node in this count.
dfs_iter(include_self=False, right_to_left=True)
Depth first iteration (non-recursive) over the child nodes.

bfs_iter(include_self=False, right_to_left=False)
Breadth first iteration (non-recursive) over the child nodes.
Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents.