# Higher Order Functions

Note: Functions taking `Tensor` arguments can also take anything accepted by
`tf.convert_to_tensor`.

[TOC]

Functional operations.

## Higher Order Operators

TensorFlow provides several higher order operators to simplify the common
map-reduce programming patterns.
### `tf.map_fn(fn, elems, dtype=None, parallel_iterations=10, back_prop=True, swap_memory=False, infer_shape=True, name=None)`

map on the list of tensors unpacked from `elems` on dimension 0.

The simplest version of `map_fn` repeatedly applies the callable `fn` to a
sequence of elements from first to last. The elements are made of the
tensors unpacked from `elems`. `dtype` is the data type of the return
value of `fn`. Users must provide `dtype` if it is different from
the data type of `elems`.

Suppose that `elems` is unpacked into `values`, a list of tensors. The shape
of the result tensor is `[values.shape[0]] + fn(values[0]).shape`.

This method also allows multi-arity `elems` and output of `fn`. If `elems`
is a (possibly nested) list or tuple of tensors, then each of these tensors
must have a matching first (unpack) dimension. The signature of `fn` may
match the structure of `elems`. That is, if `elems` is
`(t1, [t2, t3, [t4, t5]])`, then an appropriate signature for `fn` is:
`fn = lambda (t1, [t2, t3, [t4, t5]]):`.

Furthermore, `fn` may emit a different structure than its input. For example,
`fn` may look like: `fn = lambda t1: (t1 + 1, t1 - 1)`. In this case,
the `dtype` parameter is not optional: `dtype` must be a type or (possibly
nested) tuple of types matching the output of `fn`.
Args:

*  `fn`: The callable to be performed. It accepts one argument, which will
   have the same (possibly nested) structure as `elems`. Its output must have
   the same structure as `dtype` if one is provided, otherwise it must have
   the same structure as `elems`.
*  `elems`: A tensor or (possibly nested) sequence of tensors, each of which
   will be unpacked along their first dimension. The nested sequence of the
   resulting slices will be applied to `fn`.
*  `dtype`: (optional) The output type(s) of `fn`. If `fn` returns a
   structure of Tensors differing from the structure of `elems`, then `dtype`
   is not optional and must have the same structure as the output of `fn`.
*  `parallel_iterations`: (optional) The number of iterations allowed to run
   in parallel.
*  `back_prop`: (optional) True enables support for back propagation.
*  `swap_memory`: (optional) True enables GPU-CPU memory swapping.
*  `infer_shape`: (optional) False disables tests for consistent output shapes.
*  `name`: (optional) Name prefix for the returned tensors.
Returns:

A tensor or (possibly nested) sequence of tensors. Each tensor packs the
results of applying `fn` to tensors unpacked from `elems` along the first
dimension, from first to last.

Raises:

*  `TypeError`: if `fn` is not callable or the structure of the output of
   `fn` and `dtype` do not match.
*  `ValueError`: if the lengths of the output of `fn` and `dtype` do not
   match.
Examples:

```python
elems = np.array([1, 2, 3, 4, 5, 6])
squares = map_fn(lambda x: x * x, elems)
# squares == [1, 4, 9, 16, 25, 36]
```

```python
elems = (np.array([1, 2, 3]), np.array([-1, 1, -1]))
alternate = map_fn(lambda x: x[0] * x[1], elems, dtype=tf.int64)
# alternate == [-1, 2, -3]
```

```python
elems = np.array([1, 2, 3])
alternates = map_fn(lambda x: (x, -x), elems, dtype=(tf.int64, tf.int64))
# alternates[0] == [1, 2, 3]
# alternates[1] == [-1, -2, -3]
```
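Setting aside graph construction and parallel execution, the unpack-apply-repack semantics of `map_fn` can be sketched in plain Python (no TensorFlow), assuming `elems` is a plain list of slices; `map_fn_sketch` is a hypothetical name, not part of the TensorFlow API:

```python
def map_fn_sketch(fn, elems):
    """Sketch of map_fn semantics: unpack elems along dimension 0,
    apply fn to each slice from first to last, and repack the results."""
    return [fn(x) for x in elems]

squares = map_fn_sketch(lambda x: x * x, [1, 2, 3, 4, 5, 6])
# squares == [1, 4, 9, 16, 25, 36]
```

The real op differs mainly in that the per-slice calls build one shared graph and may run in parallel (`parallel_iterations`), while the sketch runs them eagerly in order.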
### `tf.foldl(fn, elems, initializer=None, parallel_iterations=10, back_prop=True, swap_memory=False, name=None)`

foldl on the list of tensors unpacked from `elems` on dimension 0.

This foldl operator repeatedly applies the callable `fn` to a sequence
of elements from first to last. The elements are made of the tensors
unpacked from `elems` on dimension 0. The callable `fn` takes two tensors as
arguments. The first argument is the accumulated value computed from the
preceding invocation of `fn`. If `initializer` is None, `elems` must contain
at least one element, and its first element is used as the initializer.

Suppose that `elems` is unpacked into `values`, a list of tensors. The shape
of the result tensor is `fn(initializer, values[0]).shape`.
Args:

*  `fn`: The callable to be performed.
*  `elems`: A tensor to be unpacked on dimension 0.
*  `initializer`: (optional) The initial value for the accumulator.
*  `parallel_iterations`: (optional) The number of iterations allowed to run
   in parallel.
*  `back_prop`: (optional) True enables support for back propagation.
*  `swap_memory`: (optional) True enables GPU-CPU memory swapping.
*  `name`: (optional) Name prefix for the returned tensors.
Returns:

A tensor resulting from applying `fn` consecutively to the list of tensors
unpacked from `elems`, from first to last.

Raises:

*  `TypeError`: if `fn` is not callable.
Example:

```python
elems = [1, 2, 3, 4, 5, 6]
sum = foldl(lambda a, x: a + x, elems)
# sum == 21
```
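The accumulation order of `foldl` matches Python's `functools.reduce`; a minimal pure-Python sketch of the semantics (the name `foldl_sketch` is hypothetical, and plain lists stand in for tensors):

```python
from functools import reduce

def foldl_sketch(fn, elems, initializer=None):
    """Sketch of foldl semantics: accumulate from first to last.
    With no initializer, the first element seeds the accumulator."""
    if initializer is None:
        return reduce(fn, elems)
    return reduce(fn, elems, initializer)

total = foldl_sketch(lambda a, x: a + x, [1, 2, 3, 4, 5, 6])
# total == 21
```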
### `tf.foldr(fn, elems, initializer=None, parallel_iterations=10, back_prop=True, swap_memory=False, name=None)`

foldr on the list of tensors unpacked from `elems` on dimension 0.

This foldr operator repeatedly applies the callable `fn` to a sequence
of elements from last to first. The elements are made of the tensors
unpacked from `elems`. The callable `fn` takes two tensors as arguments.
The first argument is the accumulated value computed from the preceding
invocation of `fn`. If `initializer` is None, `elems` must contain at least
one element, and its first element is used as the initializer.

Suppose that `elems` is unpacked into `values`, a list of tensors. The shape
of the result tensor is `fn(initializer, values[0]).shape`.
Args:

*  `fn`: The callable to be performed.
*  `elems`: A tensor that is unpacked into a sequence of tensors to apply
   `fn`.
*  `initializer`: (optional) The initial value for the accumulator.
*  `parallel_iterations`: (optional) The number of iterations allowed to run
   in parallel.
*  `back_prop`: (optional) True enables support for back propagation.
*  `swap_memory`: (optional) True enables GPU-CPU memory swapping.
*  `name`: (optional) Name prefix for the returned tensors.
Returns:

A tensor resulting from applying `fn` consecutively to the list of tensors
unpacked from `elems`, from last to first.

Raises:

*  `TypeError`: if `fn` is not callable.
Example:

```python
elems = [1, 2, 3, 4, 5, 6]
sum = foldr(lambda a, x: a + x, elems)
# sum == 21
```
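A pure-Python sketch of the last-to-first accumulation, under the assumption that with no initializer the last unpacked slice seeds the accumulator so the traversal runs strictly last to first (`foldr_sketch` is a hypothetical name, not TensorFlow API):

```python
from functools import reduce

def foldr_sketch(fn, elems, initializer=None):
    """Sketch of foldr semantics: accumulate over the slices in
    reverse order, i.e. from last to first."""
    rev = list(reversed(elems))
    if initializer is None:
        return reduce(fn, rev)  # last element seeds the accumulator
    return reduce(fn, rev, initializer)

total = foldr_sketch(lambda a, x: a + x, [1, 2, 3, 4, 5, 6])
# total == 21
```

For a commutative `fn` like addition, foldl and foldr agree; a non-commutative `fn` exposes the direction of traversal.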
### `tf.scan(fn, elems, initializer=None, parallel_iterations=10, back_prop=True, swap_memory=False, infer_shape=True, name=None)`

scan on the list of tensors unpacked from `elems` on dimension 0.

The simplest version of `scan` repeatedly applies the callable `fn` to a
sequence of elements from first to last. The elements are made of the tensors
unpacked from `elems` on dimension 0. The callable `fn` takes two tensors as
arguments. The first argument is the accumulated value computed from the
preceding invocation of `fn`. If `initializer` is None, `elems` must contain
at least one element, and its first element is used as the initializer.

Suppose that `elems` is unpacked into `values`, a list of tensors. The shape
of the result tensor is `[len(values)] + fn(initializer, values[0]).shape`.

This method also allows multi-arity `elems` and accumulator. If `elems`
is a (possibly nested) list or tuple of tensors, then each of these tensors
must have a matching first (unpack) dimension. The second argument of
`fn` must match the structure of `elems`.

If no `initializer` is provided, the output structure and dtypes of `fn`
are assumed to be the same as its input; and in this case, the first
argument of `fn` must match the structure of `elems`.

If an `initializer` is provided, then the output of `fn` must have the same
structure as `initializer`; and the first argument of `fn` must match
this structure.

For example, if `elems` is `(t1, [t2, t3])` and `initializer` is
`[i1, i2]` then an appropriate signature for `fn` in Python 2 is:
`fn = lambda (acc_p1, acc_p2), (t1, [t2, t3]):` and `fn` must return a list,
`[acc_n1, acc_n2]`. An alternative correct signature for `fn`, and the
one that works in Python 3, is:
`fn = lambda a, t:`, where `a` and `t` correspond to the input tuples.
Args:

*  `fn`: The callable to be performed. It accepts two arguments. The first
   will have the same (possibly nested) structure as `elems`. The second will
   have the same structure as `initializer` if one is provided, otherwise it
   will have the same structure as `elems`. Its output must have the same
   structure as `initializer` if one is provided, otherwise it must have the
   same structure as `elems`.
*  `elems`: A tensor or (possibly nested) sequence of tensors, each of which
   will be unpacked along their first dimension. The nested sequence of the
   resulting slices will be the first argument to `fn`.
*  `initializer`: (optional) A tensor or (possibly nested) sequence of
   tensors, initial value for the accumulator, and the expected output type
   of `fn`.
*  `parallel_iterations`: (optional) The number of iterations allowed to run
   in parallel.
*  `back_prop`: (optional) True enables support for back propagation.
*  `swap_memory`: (optional) True enables GPU-CPU memory swapping.
*  `infer_shape`: (optional) False disables tests for consistent output shapes.
*  `name`: (optional) Name prefix for the returned tensors.
Returns:

A tensor or (possibly nested) sequence of tensors. Each tensor packs the
results of applying `fn` to tensors unpacked from `elems` along the first
dimension, and the previous accumulator value(s), from first to last.

Raises:

*  `TypeError`: if `fn` is not callable or the structure of the output of
   `fn` and `initializer` do not match.
*  `ValueError`: if the lengths of the output of `fn` and `initializer` do
   not match.
Examples:

```python
elems = np.array([1, 2, 3, 4, 5, 6])
sum = scan(lambda a, x: a + x, elems)
# sum == [1, 3, 6, 10, 15, 21]
```

```python
elems = np.array([1, 2, 3, 4, 5, 6])
initializer = np.array(0)
sum_one = scan(
    lambda a, x: x[0] - x[1] + a, (elems + 1, elems), initializer)
# sum_one == [1, 2, 3, 4, 5, 6]
```

```python
elems = np.array([1, 0, 0, 0, 0, 0])
initializer = (np.array(0), np.array(1))
fibonaccis = scan(lambda a, _: (a[1], a[0] + a[1]), elems, initializer)
# fibonaccis == ([1, 1, 2, 3, 5, 8], [1, 2, 3, 5, 8, 13])
```
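The running-accumulation pattern can be sketched in plain Python (no TensorFlow), assuming a list of slices and a single accumulator; `scan_sketch` is a hypothetical name, not part of the TensorFlow API:

```python
def scan_sketch(fn, elems, initializer=None):
    """Sketch of scan semantics: like foldl, but record every
    intermediate accumulator value, from first to last."""
    elems = list(elems)
    if initializer is None:
        # The first slice seeds the accumulator and is also the
        # first output value.
        acc, rest, outputs = elems[0], elems[1:], [elems[0]]
    else:
        # With an explicit initializer, scan emits one output per
        # element of elems; the initializer itself is not emitted.
        acc, rest, outputs = initializer, elems, []
    for x in rest:
        acc = fn(acc, x)
        outputs.append(acc)
    return outputs

running = scan_sketch(lambda a, x: a + x, [1, 2, 3, 4, 5, 6])
# running == [1, 3, 6, 10, 15, 21]
```

The sketch omits the multi-arity case (nested `elems` and accumulator structures) and the graph-level details (`parallel_iterations`, shape inference), but it captures why the result has the shape `[len(values)] + fn(initializer, values[0]).shape`: one packed output per unpacked slice.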