General Usage¶
class Tensor¶
To achieve the advantages outlined in the introduction above, tensornet introduces a new class called tensornet.tensor.Tensor. It is important to appreciate the difference between an instance of this class and a simple multi-dimensional array from the numpy package (numpy.ndarray). For example, let's represent the same two-dimensional tensor with a Kronecker delta between its two legs (i.e. a diagonal matrix) both as a numpy.ndarray and as a tensornet.tensor.Tensor object:
In [1]: import numpy as np
In [2]: import tensornet as tn
In [3]: diagonal=np.array([0.12,0.13,0.14])
The numpy.ndarray object would be:
In [4]: ndarray_A=np.diag(diagonal)
In [5]: ndarray_A
Out[5]:
array([[0.12, 0.  , 0.  ],
       [0.  , 0.13, 0.  ],
       [0.  , 0.  , 0.14]])
The tensornet.tensor.Tensor object is:
In [6]: tensor_A=tn.Tensor(diagonal,["W"]).make_delta(1,2,"E")
In [7]: tensor_A
Out[7]:
tensor:
legs = 1:W(3) 2:E(3)
deltas = [{1, 2}]
-1@2-
… where we have given the two legs the (optional) names W for west and E for east. In Out[7] we can read 1:W(3), which means that the 1st leg is in the west and has dimension 3. The following line shows that the 1st and 2nd leg are connected by a Kronecker delta. Finally, there is a little sketch of this tensor indicating the orientation of the two legs. From a tensornet.tensor.Tensor object we can always export the corresponding numpy.ndarray object by using the member function tensornet.tensor.Tensor.get_ndarray():
In [8]: tensor_A.get_ndarray()
Out[8]:
array([[0.12, 0.  , 0.  ],
       [0.  , 0.13, 0.  ],
       [0.  , 0.  , 0.14]])
However, it is important to note that this is not what is stored internally in the object tensor_A. The current implementation of tensornet stores only the diagonal elements and the information that both legs are in fact the same. Future implementations might not even use a numpy object for the internal storage at all. The point is: you simply needn't care how those (low-level) things are implemented! In fact, you shouldn't care, because doing so would defeat the advantages of introducing a layer of abstraction.
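As a quick sanity check (a minimal sketch using only the calls already shown above), the exported array should match the dense matrix built directly with numpy, as Out[5] and Out[8] confirm:

# The dense export of tensor_A equals the matrix numpy builds directly:
np.allclose(tensor_A.get_ndarray(), np.diag(diagonal))   # True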
To remind you of this (and because it is a more natural way of counting), the numbering of the legs starts with 1 (and not with 0, as in numpy.ndarray objects).
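For example (a small sketch assuming that legs map to ndarray axes in order, as Out[8] suggests), leg 1 of tensor_A corresponds to axis 0 of the exported numpy.ndarray:

arr = tensor_A.get_ndarray()
arr.shape[0]   # dimension of leg 1 (W), i.e. 3
arr.shape[1]   # dimension of leg 2 (E), i.e. 3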
Implementing TN-algorithms¶
The tensornet package allows you to perform each of the following operations in a single line:
| function | description |
|---|---|
| ncon() | contract a set of given tensors in a given order |
| svd() | split a tensor into two, keeping a specified number of singular values |
| make_delta() | create a new leg connected to an existing one by a Kronecker delta |
| sum_over() | sum over a single leg (i.e. contract it with an all-ones 1D tensor) |
| merge_legs() | merge multiple legs into one |
Also, the following functions are special cases or combinations of the above:
| function | description |
|---|---|
| plug_single() | contract a rank-1 or rank-2 tensor with a tensor of arbitrary rank |
| contract_and_svd() | contract two tensors and split the result up again |
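As a small illustration of how these operations compose (a sketch using only the calls demonstrated in this document, and assuming that sum_over returns the resulting tensor so that calls can be chained, as the make_delta example above suggests), summing over the delta-connected leg of tensor_A recovers the original diagonal:

# tensor_A represents A_ij = d_i * delta_ij, so summing over leg 2
# (the E leg) yields sum_j A_ij = d_i: a 1-leg tensor of the diagonal.
tensor_A.sum_over(2).get_ndarray()   # array([0.12, 0.13, 0.14])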
Numbering of Tensor Legs¶
Let's imagine we have a 4-leg tensor named A and we sum over its third leg. The resulting tensor A' has 3 legs:

   |              |
   |              |
-- A --  -->  -- A' --
   |
   |
Now, how do we tell python which leg to do the summation on, and which leg is which in the resulting tensor? Three possible solutions come to mind, each with different advantages and disadvantages.
- Solution 1: We could always specify all the indices involved. This would mean we need to say that 1,2,3,4 map, for example, to 1,2,*summed*,3. Then legs 1 and 2 stay the same, 3 is summed over, and 4 becomes the new 3rd leg.
- This is the most general way to specify the legs. However, it is also the most cumbersome, because all the indices involved need to be specified at every step. This scheme is used in the ncon() and svd() methods to specify how to perform a contraction and a singular value decomposition, respectively.
- Solution 2: We could use the names of the legs instead of numbering them.
- This would make naming the legs non-optional, which is why this approach is not used in tensornet.
- Solution 3: We could specify that whenever a leg is inserted or removed, the following legs are shifted accordingly.
- This is the chosen approach for all functions implemented in tensornet except ncon() and svd() (which use solution 1). The major advantage is that only a minimum of legs needs to be specified at each step. Combining this standard with a consistent numbering of legs (see below) results in a compact and simple description of tensor manipulations.
Consistent Numbering of Tensor Legs¶
All functions implemented in tensornet (except ncon() and svd()) shift the numbering of the resulting legs according to the added or removed legs. If, for example, the 3rd leg of a 4-leg tensor is removed, the old 4th leg becomes the new 3rd leg. If one always uses the same scheme to number the legs of tensors, knowing which leg is which (1st, 2nd, 3rd, …) becomes very simple. One could, for example, always number the indices in the order west (W), east (E), south (S), north (N). In this case the leg numbering of the above example would be:
   |              |
   4              3
-1 A 2-  -->  -1 A' 2-
   3
   |
The corresponding implementation is:
In [1]: import numpy as np
In [2]: import tensornet as tn
In [3]: a=tn.Tensor(np.random.randn(2,3,2,4),["W","E","S","N"],name="A")
In [4]: a
Out[4]:
tensor:
name = A
legs = 1:W(2) 2:E(3) 3:S(2) 4:N(4)
|
4
-1A2-
3
|
In [5]: a.sum_over(3)
Out[5]:
tensor:
name = A
legs = 1:W(2) 2:E(3) 3:N(4)
|
3
-1A2-
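Because the legs renumber after each removal, repeated operations can reuse the same leg number. A short sketch (assuming, as above, that sum_over returns the resulting tensor so that calls can be chained):

# Summing over leg 3 twice removes first S and then the former N leg,
# which has become the new leg 3 after the first call:
b = a.sum_over(3).sum_over(3)   # remaining legs: 1:W(2) 2:E(3)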