AbstractBlockTensorMap
The main type of this package is BlockTensorMap, which is a generalization of AbstractTensorMap to the case where the tensor is a concatenation of other tensors. The design philosophy is to have the same interface as AbstractTensorMap, with the additional ability to query the individual tensors that make up the block tensor, as AbstractArray{AbstractTensorMap}.
Type hierarchy
The type hierarchy of the AbstractBlockTensorMap consists of one abstract and two concrete subtypes of AbstractBlockTensorMap:
BlockTensorMap <: AbstractBlockTensorMap <: AbstractTensorMap
SparseBlockTensorMap <: AbstractBlockTensorMap <: AbstractTensorMapIn particular, these structures hold the structural information as a HomSpace of SumSpaces, as defined in SumSpaces, as well as the individual tensors that make up the block tensor. For BlockTensorMap, the list of tensors is dense, thus they are stored in an Array{AbstractTensorMap,N}, where N is the total number of indices of a tensor. For SparseBlockTensorMap, this is not the case, and the list of tensors is stored in a Dict{CartesianIndex{N},AbstractTensorMap}.
The elementary constructors for these types are:
BlockTensorMap{TT}(undef, space::TensorMapSumSpace)
SparseBlockTensorMap{TT}(undef, space::TensorMapSumSpace)where TT<:AbstractTensorMap is the type of the individual tensors, and space is the TensorMapSumSpace that defines the structure of the block tensor.
In rare cases, undef_blocks can also be used, which won't allocate the component tensors. In these cases it is left up to the user to not access elements before they are allocated.
Similarly, they can be initialized from a list of tensors:
BlockTensorMap{TT}(tensors::AbstractArray{AbstractTensorMap,N}, space::TensorMapSumSpace)
SparseBlockTensorMap{TT}(tensors::Dict{CartesianIndex{N},AbstractTensorMap}, space::TensorMapSumSpace)Typically though, the most convenient way of obtaining a block tensor is by using one of zeros, rand or randn, as well as their sparse counterparts spzeros or sprand.
julia> using TensorKit, BlockTensorKitjulia> using BlockTensorKit: ⊕julia> V = ℂ^1 ⊕ ℂ^2;julia> W = V * V → V;julia> t = rand(W)TensorMap(ℂ^3 ← (ℂ^3 ⊗ ℂ^3)): [:, :, 1] = 0.6773139478612151 0.12356492273896968 0.7381794734760369 0.18735116354257786 0.4754240678659616 0.53607989355374 0.44336040220958717 0.4842661564886982 0.38634942085006163 [:, :, 2] = 0.6886383113196723 0.43742966298414265 0.09816949236999195 0.24992768884096472 0.275875871246671 0.7365603540468533 0.07669005382789551 0.44194381335238375 0.7172947097000264 [:, :, 3] = 0.5318300459164867 0.964599140845226 0.423721955360171 0.5865722256243584 0.1750867384751943 0.3439099552570236 0.31447578972468293 0.523384035043862 0.3516398777603492julia> eltype(t)Float64julia> s = sprand(W, 0.5)ERROR: MethodError: no method matching sprand(::TensorKit.TensorMapSpace{TensorKit.ComplexSpace, 1, 2}, ::Float64) The function `sprand` exists, but no method is defined for this combination of argument types. Closest candidates are: sprand(::TensorKit.HomSpace{SumSpace{S}, TensorKit.ProductSpace{SumSpace{S}, N₁}, TensorKit.ProductSpace{SumSpace{S}, N₂}} where {S, N₁, N₂}, ::Real) @ BlockTensorKit ~/work/BlockTensorKit.jl/BlockTensorKit.jl/src/tensors/sparseblocktensor.jl:126 sprand(::Union{SumSpace{S}, TensorKit.ProductSpace{SumSpace{S}}} where S, ::Real) @ BlockTensorKit ~/work/BlockTensorKit.jl/BlockTensorKit.jl/src/tensors/sparseblocktensor.jl:129 sprand(::Type, ::Union{SumSpace{S}, TensorKit.ProductSpace{SumSpace{S}}} where S, ::Union{SumSpace{S}, TensorKit.ProductSpace{SumSpace{S}}} where S, ::Real) @ BlockTensorKit ~/work/BlockTensorKit.jl/BlockTensorKit.jl/src/tensors/sparseblocktensor.jl:137 ...julia> eltype(s)ERROR: UndefVarError: `s` not defined in `Main` Suggestion: check for spelling errors or missing imports.
In analogy to TensorKit, most of the functionality that requires a space object can equally well be called in terms of codomain(space), domain(space), if that is more convenient.
Indexing
For indexing operators, AbstractBlockTensorMap behaves like an AbstractArray{AbstractTensorMap}, and the individual tensors can be accessed via the getindex and setindex! functions. In particular, the getindex function returns a TT object, and the setindex! function expects a TT object. Both linear and cartesian indexing styles are supported.
julia> t[1] isa eltype(t)truejulia> t[1] == t[1, 1, 1]truejulia> t[2] = 3 * t[2]0.5620534906277336julia> s[1] isa eltype(t)ERROR: UndefVarError: `s` not defined in `Main` Suggestion: check for spelling errors or missing imports.julia> s[1] == s[1, 1, 1]ERROR: UndefVarError: `s` not defined in `Main` Suggestion: check for spelling errors or missing imports.julia> s[1] += 2 * s[1]ERROR: UndefVarError: `s` not defined in `Main` Suggestion: check for spelling errors or missing imports.
Slicing operations are also supported, and the AbstractBlockTensorMap can be sliced in the same way as an AbstractArray{AbstractTensorMap}. There is however one elementary difference: as the slices still contain tensors with the same amount of legs, there can be no reduction in the number of dimensions. In particular, in contrast to AbstractArray, scalar dimensions are not discarded, and as a result, linear index slicing is not allowed.
julia> ndims(t[1, 1, :]) == 3falsejulia> ndims(t[:, 1:2, [1, 1]]) == 3ERROR: MethodError: no method matching getindex(::TensorKit.TensorMap{Float64, TensorKit.ComplexSpace, 1, 2, Vector{Float64}}, ::Colon, ::UnitRange{Int64}, ::Vector{Int64}) The function `getindex` exists, but no method is defined for this combination of argument types. Closest candidates are: getindex(::TensorKit.AbstractTensorMap, ::Union{Colon, AbstractRange{<:Union{Int128, Int16, Int32, Int64, Int8, UInt128, UInt16, UInt32, UInt64, UInt8}}, Int128, Int16, Int32, Int64, Int8, UInt128, UInt16, UInt32, UInt64, UInt8}...) @ TensorKit ~/.julia/packages/TensorKit/gu4dl/src/tensors/abstracttensor.jl:313 getindex(::TensorKit.TensorMap{T, S, N₁, N₂, A} where A<:DenseVector{T}, ::TensorKit.FusionTree{TensorKitSectors.Trivial, N₁}, ::TensorKit.FusionTree{TensorKitSectors.Trivial, N₂}) where {T, S, N₁, N₂} @ TensorKit ~/.julia/packages/TensorKit/gu4dl/src/tensors/tensor.jl:499 getindex(::TensorKit.TensorMap{T, S, N₁, N₂, A} where A<:DenseVector{T}, ::TensorKit.FusionTree{I, N₁}, ::TensorKit.FusionTree{I, N₂}) where {T, S, N₁, N₂, I<:TensorKitSectors.Sector} @ TensorKit ~/.julia/packages/TensorKit/gu4dl/src/tensors/tensor.jl:485 ...julia> t[1:2] # error2-element StridedViews.StridedView{Float64, 1, Memory{Float64}, typeof(identity)}: 0.6773139478612151 0.5620534906277336
VectorInterface.jl
As part of the TensorKit interface, AbstractBlockTensorMap also implements VectorInterface. This means that you can efficiently add, scale, and compute the inner product of AbstractBlockTensorMap objects.
julia> t1, t2 = rand!(similar(t)), rand!(similar(t));julia> add(t1, t2, rand())TensorMap(ℂ^3 ← (ℂ^3 ⊗ ℂ^3)): [:, :, 1] = 0.8034228299001099 1.2624797029089505 0.21063823763342931 1.2918284366081765 0.8610577694173832 0.9439300490535039 0.33342925860125305 0.3372152704142718 0.3603521094619397 [:, :, 2] = 0.21843999698455455 1.1940763819487017 0.9052592150234119 1.2777275500611112 0.36397497709972526 0.4621483491600174 1.1257965707032942 0.2519657385539169 0.14708757133875822 [:, :, 3] = 0.6712548820716343 0.2638561738596002 0.47494308865611756 1.1753052527235381 0.7740942896108789 0.9004627228015795 1.1966327631702574 0.45412449138287436 0.07503770777995195julia> scale(t1, rand())TensorMap(ℂ^3 ← (ℂ^3 ⊗ ℂ^3)): [:, :, 1] = 0.10124657956808862 0.17896251191739349 0.03991042023722448 0.18480405553426157 0.12251063849740539 0.12830016135616776 0.01748976721815049 0.065411061748323 0.041879148514976504 [:, :, 2] = 0.016230821630407465 0.16797923444042562 0.16731214606596811 0.19625654259925215 0.07969702330576744 0.10054642845995283 0.1536276028275326 0.016387161380206173 0.02396622158088009 [:, :, 3] = 0.11746095687399051 0.00482159905776536 0.045568667497235565 0.18687546501753396 0.10423405049993208 0.1917146776015379 0.19130417706991318 0.07593053210477367 0.006370615890870324julia> inner(t1, t2)7.128440740238072
For further in-place and possibly-in-place methods, see VectorInterface.jl
TensorOperations.jl
The TensorOperations.jl interface is also implemented for AbstractBlockTensorMap. In particular, the AbstractBlockTensorMap can be contracted with other AbstractBlockTensorMap objects, as well as with AbstractTensorMap objects. In order for that mix to work, the AbstractTensorMap objects are automatically converted to AbstractBlockTensorMap objects with a single tensor, i.e. the sum spaces will be a sum of one space. As a consequence, as soon as one of the input tensors is blocked, the output tensor will also be blocked, even though its size might be trivial. In these cases, only can be used to retrieve the single element in the BlockTensorMap.
julia> @tensor t3[a; b] := t[a; c d] * conj(t[b; c d])TensorMap(ℂ^3 ← ℂ^3): 3.086971087981131 1.8268168573649786 1.7829762074566 1.8268168573649786 2.0034027232334317 1.7529904740898308 1.7829762074566 1.7529904740898308 1.7925319047322497julia> @tensor t4[a; b] := t[1, :, :][a; c d] * conj(t[1, :, :][b; c d]) # blocktensor * blocktensor = blocktensorERROR: TensorOperations.IndexError{String}("invalid permutation of length 2: ((1,), (2, 3))")julia> t4 isa AbstractBlockTensorMapfalsejulia> only(t4) isa eltype(t4)ERROR: ArgumentError: Collection has multiple elements, must contain exactly 1 elementjulia> @tensor t5[a; b] := t[1][a; c d] * conj(t[1:1, 1:1, 1:1][b; c d]) # tensor * blocktensor = blocktensorERROR: MethodError: no method matching tensorcontract_type(::Type{Float64}, ::Float64, ::Tuple{Tuple{Int64}, Tuple{Int64, Int64}}, ::Bool, ::StridedViews.StridedView{Float64, 3, Memory{Float64}, typeof(identity)}, ::Tuple{Tuple{Int64, Int64}, Tuple{Int64}}, ::Bool, ::Tuple{Tuple{Int64}, Tuple{Int64}}) The function `tensorcontract_type` exists, but no method is defined for this combination of argument types. Closest candidates are: tensorcontract_type(::Any, ::AbstractArray, ::Tuple{NTuple{N₁, Int64}, NTuple{N₂, Int64}} where {N₁, N₂}, ::Bool, ::AbstractArray, ::Tuple{NTuple{N₁, Int64}, NTuple{N₂, Int64}} where {N₁, N₂}, ::Bool, ::Tuple{NTuple{N₁, Int64}, NTuple{N₂, Int64}} where {N₁, N₂}) @ TensorOperations ~/.julia/packages/TensorOperations/35umh/src/implementation/allocator.jl:136 tensorcontract_type(::Any, ::AbstractBlockTensorMap, ::Tuple{NTuple{N₁, Int64}, NTuple{N₂, Int64}} where {N₁, N₂}, ::Bool, ::AbstractBlockTensorMap, ::Tuple{NTuple{N₁, Int64}, NTuple{N₂, Int64}} where {N₁, N₂}, ::Bool, ::Tuple{NTuple{N₁, Int64}, NTuple{N₂, Int64}}) where {N₁, N₂} @ BlockTensorKit ~/work/BlockTensorKit.jl/BlockTensorKit.jl/src/tensors/tensoroperations.jl:36 tensorcontract_type(::Any, ::TensorKit.AbstractTensorMap, ::Tuple{NTuple{N₁, Int64}, NTuple{N₂, Int64}} where {N₁, N₂}, ::Bool, ::AbstractBlockTensorMap, ::Tuple{NTuple{N₁, Int64}, NTuple{N₂, Int64}} where {N₁, N₂}, ::Bool, ::Tuple{NTuple{N₁, Int64}, NTuple{N₂, Int64}}) where {N₁, N₂} @ BlockTensorKit ~/work/BlockTensorKit.jl/BlockTensorKit.jl/src/tensors/tensoroperations.jl:54 ...julia> t5 isa AbstractBlockTensorMapERROR: UndefVarError: `t5` not defined in `Main` Suggestion: check for spelling errors or missing imports.julia> only(t5) isa eltype(t5)ERROR: UndefVarError: `t5` not defined in `Main` Suggestion: check for spelling errors or missing imports.
Factorizations
Currently, there is only rudimentary support for factorizations of AbstractBlockTensorMap objects. In particular, the implementations are not yet optimized for performance, and the factorizations are typically carried out by mapping to a dense tensor, and then performing the factorization on that tensor.