ops

cat(keyedtensors, dim=0)

like torch.cat but for KeyedTensors: concatenates a sequence of KeyedTensors along an existing dimension.

Parameters
  • keyedtensors (Sequence[KeyedTensor]) – a sequence of KeyedTensors. All should have the same keys (though the keys may be ordered differently), and shapes should be alignable just as they would need to be for torch.cat.

  • dim (int) – integer dimension to concatenate along. Defaults to 0.

Example

>>> import torch
>>>
>>> import keyedtensor as kt
>>> from keyedtensor import KeyedTensor
>>>
>>> x1 = KeyedTensor(a=torch.ones(2, 3), b=torch.ones(2))
>>> x2 = KeyedTensor(a=torch.ones(3, 3), b=torch.ones(3)) * 2
>>> x3 = KeyedTensor(b=torch.ones(1), a=torch.ones(1, 3)) * 3
>>>
>>> kt.cat((x1, x2, x3), dim=0).to(torch.int64)
KeyedTensor(a=tensor([[1, 1, 1],
                      [1, 1, 1],
                      [2, 2, 2],
                      [2, 2, 2],
                      [2, 2, 2],
                      [3, 3, 3]]),
            b=tensor([1, 1, 2, 2, 2, 3]))
Return type

KeyedTensor
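The key-alignment contract described above can be sketched in plain Python, with lists standing in for 1-D tensors. Note that keyed_cat is a hypothetical illustration of the behavior, not the library's implementation:

```python
from typing import Dict, List, Sequence


def keyed_cat(mappings: Sequence[Dict[str, List[float]]]) -> Dict[str, List[float]]:
    """Sketch of dim=0 concatenation over dicts of 1-D 'tensors' (plain lists).

    Mirrors the documented contract: all inputs must share the same keys,
    but key order may differ between inputs.
    """
    keys = set(mappings[0])
    if any(set(m) != keys for m in mappings):
        raise KeyError("all inputs must have the same keys")
    # Align by key name, not by insertion order, then concatenate per key.
    return {k: [v for m in mappings for v in m[k]] for k in sorted(keys)}
```

As in the example above, inputs whose keys appear in different orders still line up, because values are gathered by key rather than by position.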

stack(keyedtensors, dim=0)

like torch.stack but for KeyedTensors: stacks a sequence of KeyedTensors along a new dimension.

Parameters
  • keyedtensors (Sequence[KeyedTensor]) – a sequence of KeyedTensors. All should have the same keys (though the keys may be ordered differently), and shapes should be identical just as they would need to be for torch.stack.

  • dim (int) – integer dimension to stack along. Defaults to 0.

Example

>>> import torch
>>>
>>> import keyedtensor as kt
>>> from keyedtensor import KeyedTensor
>>>
>>> x1 = KeyedTensor(a=torch.ones(3), b=torch.ones(2))
>>> x2 = KeyedTensor(a=torch.ones(3), b=torch.ones(2)) * 2
>>> x3 = KeyedTensor(b=torch.ones(2), a=torch.ones(3)) * 3
>>>
>>> kt.stack((x1, x2, x3), dim=0).to(torch.int64)
KeyedTensor(a=tensor([[1, 1, 1],
                      [2, 2, 2],
                      [3, 3, 3]]),
            b=tensor([[1, 1],
                      [2, 2],
                      [3, 3]]))
Return type

KeyedTensor
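The stacking contract can likewise be sketched in plain Python, with lists standing in for 1-D tensors. keyed_stack is a hypothetical illustration of the behavior (same keys required, shapes must match exactly, and a new leading dimension indexes the inputs), not the library's implementation:

```python
from typing import Dict, List, Sequence


def keyed_stack(mappings: Sequence[Dict[str, List[float]]]) -> Dict[str, List[List[float]]]:
    """Sketch of dim=0 stacking over dicts of 1-D 'tensors' (plain lists)."""
    keys = set(mappings[0])
    if any(set(m) != keys for m in mappings):
        raise KeyError("all inputs must have the same keys")
    # Unlike concatenation, stacking requires identical shapes per key.
    for k in keys:
        if len({len(m[k]) for m in mappings}) != 1:
            raise ValueError(f"shapes for key {k!r} must match exactly")
    # The new leading dimension indexes the input KeyedTensors.
    return {k: [list(m[k]) for m in mappings] for k in sorted(keys)}
```

The key difference from keyed_cat is the shape check: stacking creates a new dimension, so each key's values must match in every dimension rather than merely being alignable along dim.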