AbstractBlockTensorMap

The main type of this package is BlockTensorMap, which is a generalization of AbstractTensorMap to the case where the tensor is a concatenation of other tensors. The design philosophy is to retain the interface of AbstractTensorMap, with the additional ability to access the individual tensors that make up the block tensor, as if it were an AbstractArray{AbstractTensorMap}.

Type hierarchy

The type hierarchy consists of a single abstract type, AbstractBlockTensorMap, with two concrete subtypes:

BlockTensorMap <: AbstractBlockTensorMap <: AbstractTensorMap
SparseBlockTensorMap <: AbstractBlockTensorMap <: AbstractTensorMap

In particular, these structures hold the structural information as a HomSpace of SumSpaces, as defined in SumSpaces, as well as the individual tensors that make up the block tensor. For BlockTensorMap, the list of tensors is dense, so they are stored in an Array{AbstractTensorMap,N}, where N is the total number of indices of the tensor. For SparseBlockTensorMap, only the tensors that are explicitly set are stored, in a Dict{CartesianIndex{N},AbstractTensorMap}.
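
To make this layout concrete, the following is a rough sketch of how the two types organize their data. The struct names and field names here are purely illustrative, and need not match the actual implementation:

using TensorKit, BlockTensorKit

# Hypothetical sketch of the storage layout; names are illustrative only.
struct DenseBlockSketch{TT<:AbstractTensorMap,N}
    space::TensorMapSumSpace          # HomSpace of SumSpaces
    data::Array{TT,N}                 # one tensor per block, stored densely
end

struct SparseBlockSketch{TT<:AbstractTensorMap,N}
    space::TensorMapSumSpace          # HomSpace of SumSpaces
    data::Dict{CartesianIndex{N},TT}  # only the explicitly stored blocks
end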

The elementary constructors for these types are:

BlockTensorMap{TT}(undef, space::TensorMapSumSpace)
SparseBlockTensorMap{TT}(undef, space::TensorMapSumSpace)

where TT<:AbstractTensorMap is the type of the individual tensors, and space is the TensorMapSumSpace that defines the structure of the block tensor.
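
For example, a block tensor with uninitialized components can be allocated as follows (a sketch; the component type TT is written out explicitly, matching the element type shown in the examples below):

using TensorKit, BlockTensorKit
using BlockTensorKit: ⊕

V = ℂ^1 ⊕ ℂ^2
W = V * V → V
TT = TensorMap{Float64,ComplexSpace,1,2,Vector{Float64}}
t = BlockTensorMap{TT}(undef, W)    # components allocated, but uninitialized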

Note

In rare cases, undef_blocks can also be used, which will not allocate the component tensors. In that case it is up to the user not to access elements before they have been assigned.
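
Continuing the sketch above, this pattern might look as follows (assuming undef_blocks, as in BlockArrays.jl, is available in scope):

t = BlockTensorMap{TT}(undef_blocks, W)  # no component tensors allocated yet
t[1, 1, 1] = rand(ℂ^1 ← (ℂ^1 ⊗ ℂ^1))    # assign a block before reading it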

Similarly, they can be initialized from a list of tensors:

BlockTensorMap{TT}(tensors::AbstractArray{AbstractTensorMap,N}, space::TensorMapSumSpace)
SparseBlockTensorMap{TT}(tensors::Dict{CartesianIndex{N},AbstractTensorMap}, space::TensorMapSumSpace)
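
For instance, a matrix-like block tensor could be assembled from an explicit Matrix of tensors as follows (a sketch continuing from the snippet above; the array axes are assumed to run over the codomain blocks first, then the domain blocks):

dims = (1, 2)   # dimensions of the summands of V = ℂ^1 ⊕ ℂ^2
tensors = [rand(ℂ^m ← ℂ^n) for m in dims, n in dims]
TT′ = TensorMap{Float64,ComplexSpace,1,1,Vector{Float64}}
t′ = BlockTensorMap{TT′}(tensors, V ← V)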

Typically, though, the most convenient way of obtaining a block tensor is through one of zeros, rand or randn, or their sparse counterparts spzeros and sprand.

julia> using TensorKit, BlockTensorKit
julia> using BlockTensorKit: ⊕
julia> V = ℂ^1 ⊕ ℂ^2;
julia> W = V * V → V;
julia> t = rand(W)
2×2×2 BlockTensorMap{TensorKit.TensorMap{Float64, TensorKit.ComplexSpace, 1, 2, Vector{Float64}}}((ℂ^1 ⊕ ℂ^2) ← ((ℂ^1 ⊕ ℂ^2) ⊗ (ℂ^1 ⊕ ℂ^2))) with 8 stored entries:
  [1,1,1]  =  TensorMap(ℂ^1 ← (ℂ^1 ⊗ ℂ^1))
  [2,1,1]  =  TensorMap(ℂ^2 ← (ℂ^1 ⊗ ℂ^1))
  [1,2,1]  =  TensorMap(ℂ^1 ← (ℂ^2 ⊗ ℂ^1))
  [2,2,1]  =  TensorMap(ℂ^2 ← (ℂ^2 ⊗ ℂ^1))
  [1,1,2]  =  TensorMap(ℂ^1 ← (ℂ^1 ⊗ ℂ^2))
  [2,1,2]  =  TensorMap(ℂ^2 ← (ℂ^1 ⊗ ℂ^2))
  [1,2,2]  =  TensorMap(ℂ^1 ← (ℂ^2 ⊗ ℂ^2))
  [2,2,2]  =  TensorMap(ℂ^2 ← (ℂ^2 ⊗ ℂ^2))
julia> eltype(t)
TensorKit.TensorMap{Float64, TensorKit.ComplexSpace, 1, 2, Vector{Float64}}
julia> s = sprand(W, 0.5)
2×2×2 SparseBlockTensorMap{TensorKit.TensorMap{Float64, TensorKit.ComplexSpace, 1, 2, Vector{Float64}}}((ℂ^1 ⊕ ℂ^2) ← ((ℂ^1 ⊕ ℂ^2) ⊗ (ℂ^1 ⊕ ℂ^2))) with 5 stored entries:
⎡⠁⠉⎤
⎣⢀⢀⎦
julia> eltype(s)
TensorKit.TensorMap{Float64, TensorKit.ComplexSpace, 1, 2, Vector{Float64}}
Note

In analogy to TensorKit, most of the functionality that requires a space object can equally well be called in terms of codomain(space) and domain(space), if that is more convenient.
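
For instance, assuming the two-argument forms mirror those of TensorKit:

t_zero  = zeros(W)                        # single space object
t_zero′ = zeros(codomain(W), domain(W))   # codomain and domain separately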

Indexing

For indexing operations, AbstractBlockTensorMap behaves like an AbstractArray{AbstractTensorMap}, and the individual tensors can be accessed via the getindex and setindex! functions. In particular, getindex returns a TT object, and setindex! expects a TT object. Both linear and Cartesian indexing styles are supported.

julia> t[1] isa eltype(t)
true
julia> t[1] == t[1, 1, 1]
true
julia> t[2] = 3 * t[2]
TensorMap(ℂ^2 ← (ℂ^1 ⊗ ℂ^1)):
[:, :, 1] =
 2.0588577799052072
 1.3573638727539417
julia> s[1] isa eltype(t)
true
julia> s[1] == s[1, 1, 1]
true
julia> s[1] += 2 * s[1]
TensorMap(ℂ^1 ← (ℂ^1 ⊗ ℂ^1)):
[:, :, 1] =
 0.5433265381618888

Slicing operations are also supported, and an AbstractBlockTensorMap can be sliced in the same way as an AbstractArray{AbstractTensorMap}. There is, however, one important difference: since the slices still contain tensors with the same number of legs, there can be no reduction in the number of dimensions. In particular, in contrast to AbstractArray, scalar dimensions are not discarded, and as a result, linear index slicing is not allowed.

julia> ndims(t[1, 1, :]) == 3
true
julia> ndims(t[:, 1:2, [1, 1]]) == 3
true
julia> t[1:2] # error
ERROR: ArgumentError: Cannot index TensorKit.TensorMapSpace{TensorKit.ComplexSpace, 1, 2}[ℂ^1 ← (ℂ^1 ⊗ ℂ^1) ℂ^1 ← (ℂ^2 ⊗ ℂ^1); ℂ^2 ← (ℂ^1 ⊗ ℂ^1) ℂ^2 ← (ℂ^2 ⊗ ℂ^1);;; ℂ^1 ← (ℂ^1 ⊗ ℂ^2) ℂ^1 ← (ℂ^2 ⊗ ℂ^2); ℂ^2 ← (ℂ^1 ⊗ ℂ^2) ℂ^2 ← (ℂ^2 ⊗ ℂ^2)] with 1:2

VectorInterface.jl

As part of the TensorKit interface, AbstractBlockTensorMap also implements VectorInterface. This means that you can efficiently add, scale, and compute the inner product of AbstractBlockTensorMap objects.

julia> t1, t2 = rand!(similar(t)), rand!(similar(t));
julia> add(t1, t2, rand())
2×2×2 BlockTensorMap{TensorKit.TensorMap{Float64, TensorKit.ComplexSpace, 1, 2, Vector{Float64}}}((ℂ^1 ⊕ ℂ^2) ← ((ℂ^1 ⊕ ℂ^2) ⊗ (ℂ^1 ⊕ ℂ^2))) with 8 stored entries:
  [1,1,1]  =  TensorMap(ℂ^1 ← (ℂ^1 ⊗ ℂ^1))
  [2,1,1]  =  TensorMap(ℂ^2 ← (ℂ^1 ⊗ ℂ^1))
  [1,2,1]  =  TensorMap(ℂ^1 ← (ℂ^2 ⊗ ℂ^1))
  [2,2,1]  =  TensorMap(ℂ^2 ← (ℂ^2 ⊗ ℂ^1))
  [1,1,2]  =  TensorMap(ℂ^1 ← (ℂ^1 ⊗ ℂ^2))
  [2,1,2]  =  TensorMap(ℂ^2 ← (ℂ^1 ⊗ ℂ^2))
  [1,2,2]  =  TensorMap(ℂ^1 ← (ℂ^2 ⊗ ℂ^2))
  [2,2,2]  =  TensorMap(ℂ^2 ← (ℂ^2 ⊗ ℂ^2))
julia> scale(t1, rand())
2×2×2 BlockTensorMap{TensorKit.TensorMap{Float64, TensorKit.ComplexSpace, 1, 2, Vector{Float64}}}((ℂ^1 ⊕ ℂ^2) ← ((ℂ^1 ⊕ ℂ^2) ⊗ (ℂ^1 ⊕ ℂ^2))) with 8 stored entries:
  [1,1,1]  =  TensorMap(ℂ^1 ← (ℂ^1 ⊗ ℂ^1))
  [2,1,1]  =  TensorMap(ℂ^2 ← (ℂ^1 ⊗ ℂ^1))
  [1,2,1]  =  TensorMap(ℂ^1 ← (ℂ^2 ⊗ ℂ^1))
  [2,2,1]  =  TensorMap(ℂ^2 ← (ℂ^2 ⊗ ℂ^1))
  [1,1,2]  =  TensorMap(ℂ^1 ← (ℂ^1 ⊗ ℂ^2))
  [2,1,2]  =  TensorMap(ℂ^2 ← (ℂ^1 ⊗ ℂ^2))
  [1,2,2]  =  TensorMap(ℂ^1 ← (ℂ^2 ⊗ ℂ^2))
  [2,2,2]  =  TensorMap(ℂ^2 ← (ℂ^2 ⊗ ℂ^2))
julia> inner(t1, t2)
5.632989272515041

For further in-place and possibly-in-place methods, see VectorInterface.jl.
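
For instance, the in-place variants add! and scale! mutate their first argument; a brief sketch using t1 and t2 from above:

using VectorInterface: add!, scale!

α = rand()
add!(t1, t2, α)   # t1 ← t1 + α * t2, in-place
scale!(t1, 2.0)   # t1 ← 2.0 * t1, in-place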

TensorOperations.jl

The TensorOperations.jl interface is also implemented for AbstractBlockTensorMap. In particular, an AbstractBlockTensorMap can be contracted with other AbstractBlockTensorMap objects, as well as with AbstractTensorMap objects. For this mix to work, the AbstractTensorMap objects are automatically converted to AbstractBlockTensorMap objects with a single tensor, i.e. the sum spaces will be a sum of a single space. As a consequence, as soon as one of the input tensors is blocked, the output tensor will also be blocked, even though its size might be trivial. In these cases, the function only can be used to retrieve the single element of the BlockTensorMap.

julia> @tensor t3[a; b] := t[a; c d] * conj(t[b; c d])
2×2 BlockTensorMap{TensorKit.TensorMap{Float64, TensorKit.ComplexSpace, 1, 1, Vector{Float64}}}((ℂ^1 ⊕ ℂ^2) ← (ℂ^1 ⊕ ℂ^2)) with 4 stored entries:
  [1,1]  =  TensorMap(ℂ^1 ← ℂ^1)
  [2,1]  =  TensorMap(ℂ^2 ← ℂ^1)
  [1,2]  =  TensorMap(ℂ^1 ← ℂ^2)
  [2,2]  =  TensorMap(ℂ^2 ← ℂ^2)
julia> @tensor t4[a; b] := t[1, :, :][a; c d] * conj(t[1, :, :][b; c d]) # blocktensor * blocktensor = blocktensor
1×1 BlockTensorMap{TensorKit.TensorMap{Float64, TensorKit.ComplexSpace, 1, 1, Vector{Float64}}}(⊕(ℂ^1) ← ⊕(ℂ^1)) with 1 stored entry:
  [1,1]  =  TensorMap(ℂ^1 ← ℂ^1)
julia> t4 isa AbstractBlockTensorMap
true
julia> only(t4) isa eltype(t4)
true
julia> @tensor t5[a; b] := t[1][a; c d] * conj(t[1:1, 1:1, 1:1][b; c d]) # tensor * blocktensor = blocktensor
1×1 BlockTensorMap{TensorKit.TensorMap{Float64, TensorKit.ComplexSpace, 1, 1, Vector{Float64}}}(⊕(ℂ^1) ← ⊕(ℂ^1)) with 1 stored entry:
  [1,1]  =  TensorMap(ℂ^1 ← ℂ^1)
julia> t5 isa AbstractBlockTensorMap
true
julia> only(t5) isa eltype(t5)
true

Factorizations

Currently, there is only rudimentary support for factorizations of AbstractBlockTensorMap objects. In particular, the implementations are not yet optimized for performance, and the factorizations are typically carried out by mapping the block tensor to a dense tensor, and then performing the factorization on that tensor.

Note

Most factorizations do not retain the additional imposed block structure. In particular, constructions of orthogonal bases will typically mix up the subspaces, and as such the resulting vector spaces will be SumSpaces consisting of a single term.
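
As an illustration, a singular value decomposition can be called just as for an ordinary TensorMap. A sketch, using the running example t from above; in line with the note, the factored spaces of U and Vᴴ will generically be SumSpaces with a single summand:

# SVD across the codomain ← domain partition; ϵ is the truncation error
U, S, Vᴴ, ϵ = tsvd(t)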