gigl.utils.share_memory#
Functions#
| share_memory(entity) | Based on GraphLearn-for-PyTorch's share_memory implementation, with additional support for handling empty tensors with share_memory. |
Module Contents#
- gigl.utils.share_memory.share_memory(entity)[source]#
- Based on GraphLearn-for-PyTorch’s share_memory implementation, with additional support for handling empty tensors with share_memory.
Calling share_memory_() on an empty tensor may cause processes to hang, although the root cause of this is currently unknown. As a result, we opt not to move empty tensors to shared memory if they are provided.
When calling share_memory on a RangePartitionBook, the partition bounds are not moved to shared memory: GLT does not natively provide a ForkingPickler registration for RangePartitionBook, and the cost of skipping this is minimal, since the partition-bounds tensor is very small, its length being equal to the number of machines. A usage sketch is shown below the parameter list.
- Parameters:
entity (Optional[Union[torch.Tensor, PartitionBook, Dict[_KeyType, torch.Tensor], Dict[_KeyType, PartitionBook]]]) – Homogeneous or heterogeneous entity of tensors that is being moved to shared memory
- Return type:
None
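For illustration, a minimal usage sketch is below, assuming a heterogeneous dict of node-feature tensors (the node-type keys and tensor shapes are hypothetical). Because the function returns None, tensors are moved to shared memory in place, and the empty tensor is left untouched per the behavior described above.

```python
import torch

from gigl.utils.share_memory import share_memory

# Hypothetical heterogeneous node-feature dict; the node-type keys
# ("user", "item") and feature dimensions are illustrative only.
node_features = {
    "user": torch.randn(100, 16),
    "item": torch.empty(0, 16),  # empty tensor: will not be moved to shared memory
}

# share_memory returns None; non-empty tensors are moved to shared memory in place.
share_memory(node_features)

print(node_features["user"].is_shared())  # True
print(node_features["item"].is_shared())  # False, empty tensors are skipped
```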