CerebNet.models.sub_module
- class CerebNet.models.sub_module.ClassifierBlock(params)
- Classification Block.
- Methods
- add_module(name, module): Add a child module to the current module.
- apply(fn): Apply fn recursively to every submodule (as returned by .children()) as well as self.
- bfloat16(): Casts all floating point parameters and buffers to bfloat16 datatype.
- buffers([recurse]): Return an iterator over module buffers.
- children(): Return an iterator over immediate children modules.
- compile(*args, **kwargs): Compile this Module's forward using torch.compile().
- cpu(): Move all model parameters and buffers to the CPU.
- cuda([device]): Move all model parameters and buffers to the GPU.
- double(): Casts all floating point parameters and buffers to double datatype.
- eval(): Set the module in evaluation mode.
- extra_repr(): Return the extra representation of the module.
- float(): Casts all floating point parameters and buffers to float datatype.
- forward(x): Computational graph of the classifier. Takes the output of the last CompetitiveDenseDecoder block (tensor x) and returns logits.
- get_buffer(target): Return the buffer given by target if it exists, otherwise throw an error.
- get_extra_state(): Return any extra state to include in the module's state_dict.
- get_parameter(target): Return the parameter given by target if it exists, otherwise throw an error.
- get_submodule(target): Return the submodule given by target if it exists, otherwise throw an error.
- half(): Casts all floating point parameters and buffers to half datatype.
- ipu([device]): Move all model parameters and buffers to the IPU.
- load_state_dict(state_dict[, strict, assign]): Copy parameters and buffers from state_dict into this module and its descendants.
- modules(): Return an iterator over all modules in the network.
- mtia([device]): Move all model parameters and buffers to the MTIA.
- named_buffers([prefix, recurse, ...]): Return an iterator over module buffers, yielding both the name of the buffer as well as the buffer itself.
- named_children(): Return an iterator over immediate children modules, yielding both the name of the module as well as the module itself.
- named_modules([memo, prefix, remove_duplicate]): Return an iterator over all modules in the network, yielding both the name of the module as well as the module itself.
- named_parameters([prefix, recurse, ...]): Return an iterator over module parameters, yielding both the name of the parameter as well as the parameter itself.
- parameters([recurse]): Return an iterator over module parameters.
- register_backward_hook(hook): Register a backward hook on the module.
- register_buffer(name, tensor[, persistent]): Add a buffer to the module.
- register_forward_hook(hook, *[, prepend, ...]): Register a forward hook on the module.
- register_forward_pre_hook(hook, *[, ...]): Register a forward pre-hook on the module.
- register_full_backward_hook(hook[, prepend]): Register a backward hook on the module.
- register_full_backward_pre_hook(hook[, prepend]): Register a backward pre-hook on the module.
- register_load_state_dict_post_hook(hook): Register a post-hook to be run after the module's load_state_dict() is called.
- register_load_state_dict_pre_hook(hook): Register a pre-hook to be run before the module's load_state_dict() is called.
- register_module(name, module): Alias for add_module().
- register_parameter(name, param): Add a parameter to the module.
- register_state_dict_post_hook(hook): Register a post-hook for the state_dict() method.
- register_state_dict_pre_hook(hook): Register a pre-hook for the state_dict() method.
- requires_grad_([requires_grad]): Change if autograd should record operations on parameters in this module.
- set_extra_state(state): Set extra state contained in the loaded state_dict.
- set_submodule(target, module[, strict]): Set the submodule given by target if it exists, otherwise throw an error.
- share_memory(): See torch.Tensor.share_memory_().
- state_dict(*args[, destination, prefix, ...]): Return a dictionary containing references to the whole state of the module.
- to(*args, **kwargs): Move and/or cast the parameters and buffers.
- to_empty(*, device[, recurse]): Move the parameters and buffers to the specified device without copying storage.
- train([mode]): Set the module in training mode.
- type(dst_type): Casts all parameters and buffers to dst_type.
- xpu([device]): Move all model parameters and buffers to the XPU.
- zero_grad([set_to_none]): Reset gradients of all model parameters.
- __call__
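For orientation, here is a minimal standalone sketch of what a classification head of this kind typically looks like: a single kernel_c-sized convolution mapping the decoder's num_filters channels to num_classes logit channels. This is an illustrative stand-in written against plain PyTorch, not the CerebNet implementation; the layer layout and the default sizes are assumptions taken from the params dictionary documented further below.

```python
import torch
import torch.nn as nn


class ToyClassifierBlock(nn.Module):
    """Illustrative stand-in: map final decoder features to per-class logits."""

    def __init__(self, num_filters: int = 64, num_classes: int = 44, kernel_c: int = 1):
        super().__init__()
        # Assumed layout: a kernel_c x kernel_c convolution producing one output
        # channel per class; the real ClassifierBlock may differ in detail.
        self.conv = nn.Conv2d(num_filters, num_classes, kernel_c, stride=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.conv(x)  # logits of shape (N, num_classes, H, W)


decoder_out = torch.randn(2, 64, 128, 128)   # output of the last decoder block
logits = ToyClassifierBlock()(decoder_out)
print(logits.shape)                          # torch.Size([2, 44, 128, 128])
```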
- class CerebNet.models.sub_module.CompetitiveDecoderBlock(params, outblock=False)
- Decoder Block = (Unpooling + Skip Connection) -> Dense Block
- Methods
- Inherits the standard torch.nn.Module interface (see the full list under ClassifierBlock above). Class-specific method:
- forward(x, out_block, indices)
- Computational graph of the Decoder block (a standalone sketch follows this entry):
- Unpooling of feature maps from lower block 
- Maxout combination of unpooled map + skip connection 
- Forwarding toward CompetitiveDenseBlock 
 
 - Parameters:
- x : tensor
- Input feature map from lower block (gets unpooled and maxed with out_block).
- out_block : tensor
- Skip connection feature map from the corresponding Encoder.
- indices : tensor
- Indices for unpooling from the corresponding Encoder (maxpool op).

- Returns:
- out_block : tensor
- Processed feature maps.
 
 
 
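The three steps above map onto a handful of PyTorch primitives. The following standalone sketch is illustrative only: dense_block is a placeholder and the real CompetitiveDecoderBlock differs in detail. It shows unpooling with the encoder's indices, maxout fusion with the skip connection, and the hand-off to a dense block.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def decoder_step(x, out_block, indices, dense_block, pool=2, stride_pool=2):
    """Illustrative decoder step: unpool -> maxout with skip -> dense block."""
    # 1. Unpool the low-resolution features using the encoder's maxpool indices.
    unpooled = F.max_unpool2d(x, indices, kernel_size=pool, stride=stride_pool)
    # 2. Competitive (maxout) fusion: element-wise maximum of the unpooled map
    #    and the skip connection, instead of concatenation or addition.
    fused, _ = torch.max(torch.stack([unpooled, out_block], dim=0), dim=0)
    # 3. Hand the fused map to the decoder's dense block.
    return dense_block(fused)


skip = torch.randn(1, 64, 64, 64)                           # encoder skip connection
pooled, idx = F.max_pool2d(skip, 2, 2, return_indices=True)
out = decoder_step(pooled, skip, idx, dense_block=nn.Identity())
print(out.shape)                                            # torch.Size([1, 64, 64, 64])
```

Because the fusion keeps only element-wise maxima, the channel count stays constant across the skip connection, which is what distinguishes this competitive design from concatenation-based decoders.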
- class CerebNet.models.sub_module.CompetitiveDenseBlock(params, outblock=False, discriminator_block=False)
- Define a competitive dense block. A dense block consists of 3 convolutional layers with BN/ReLU.
- Parameters:
- params = {'num_channels': 1, 'num_filters': 64, 'kernel_h': 5, 'kernel_w': 5, 'stride_conv': 1, 'pool': 2, 'stride_pool': 2, 'num_classes': 44, 'kernel_c': 1, 'input': True}.
- Methods
- Inherits the standard torch.nn.Module interface (see the full list under ClassifierBlock above). Class-specific method:
- forward(x)
- CompetitiveDenseBlock's computational graph: {in (Conv-BN from prev. block) -> PReLU} -> {Conv -> BN -> Maxout -> PReLU} x 2 -> {Conv -> BN} -> out. The block ends with a batch-normed output so that maxout can also be applied across skip connections.
- Parameters:
- x : tensor
- Input tensor (image or feature map).

- Returns:
- tensor
- Output tensor (processed feature map).
 
 
 
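The "Maxout" in this graph is what makes the block competitive: after each Conv -> BN, the new batch-normed map competes element-wise with the batch-normed map it received, and only the maximum survives before the PReLU. Below is a minimal standalone sketch of one such {Conv -> BN -> Maxout -> PReLU} stage; it is illustrative rather than the CerebNet code, and the kernel size, padding, and filter count are assumptions based on the params dictionary above.

```python
import torch
import torch.nn as nn


class MaxoutStage(nn.Module):
    """One {Conv -> BN -> Maxout -> PReLU} stage of a competitive dense block."""

    def __init__(self, num_filters: int = 64, kernel: int = 5):
        super().__init__()
        # 'same' padding so that the maxout operands have identical shapes.
        self.conv = nn.Conv2d(num_filters, num_filters, kernel, stride=1,
                              padding=kernel // 2)
        self.bn = nn.BatchNorm2d(num_filters)
        self.act_in = nn.PReLU()
        self.act_out = nn.PReLU()

    def forward(self, x_bn_prev: torch.Tensor) -> torch.Tensor:
        # x_bn_prev is the batch-normed output of the previous stage.
        x_bn = self.bn(self.conv(self.act_in(x_bn_prev)))
        # Maxout: element-wise competition between old and new feature maps.
        fused, _ = torch.max(torch.stack([x_bn_prev, x_bn], dim=0), dim=0)
        return self.act_out(fused)


feat = torch.randn(1, 64, 32, 32)      # batch-normed features from the previous block
print(MaxoutStage()(feat).shape)       # torch.Size([1, 64, 32, 32])
```

Keeping only element-wise maxima means the number of channels never grows through the block, unlike concatenation-based dense blocks.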
- class CerebNet.models.sub_module.CompetitiveDenseBlockInput(params)
- Define a competitive dense block for the network input, consisting of 3 convolutional layers with BN/ReLU.
- Parameters:
- params = {'num_channels': 1, 'num_filters': 64, 'kernel_h': 5, 'kernel_w': 5, 'stride_conv': 1, 'pool': 2, 'stride_pool': 2, 'num_classes': 44, 'kernel_c': 1, 'input': True}.
- Methods
- Inherits the standard torch.nn.Module interface (see the full list under ClassifierBlock above). Class-specific method:
- forward(x): CompetitiveDenseBlockInput's computational graph: in -> BN -> {Conv -> BN -> PReLU} -> {Conv -> BN -> Maxout -> PReLU} -> {Conv -> BN} -> out.
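As a usage sketch, the parameter dictionary from the docstring can be written out as plain Python and passed to the constructor. This assumes the keys listed above are the only ones the constructor reads, that CerebNet is importable, and that the convolutions are padded to preserve spatial size; if the actual class expects additional entries, they would need to be added.

```python
import torch

from CerebNet.models.sub_module import CompetitiveDenseBlockInput

# Parameter dictionary as listed in the class docstring above.
params = {
    "num_channels": 1,
    "num_filters": 64,
    "kernel_h": 5,
    "kernel_w": 5,
    "stride_conv": 1,
    "pool": 2,
    "stride_pool": 2,
    "num_classes": 44,
    "kernel_c": 1,
    "input": True,
}

block = CompetitiveDenseBlockInput(params)
x = torch.randn(1, params["num_channels"], 128, 128)  # raw single-channel input slice
out = block(x)
print(out.shape)  # expected (1, num_filters, 128, 128) if convolutions preserve spatial size
```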
- class CerebNet.models.sub_module.CompetitiveEncoderBlock(params)
- Encoder Block = CompetitiveDenseBlock + Max Pooling.
- Methods
- Inherits the standard torch.nn.Module interface (see the full list under ClassifierBlock above). Class-specific method:
- forward(x): Computational graph for the Encoder Block: CompetitiveDenseBlock followed by max pooling (indices retained).
- class CerebNet.models.sub_module.CompetitiveEncoderBlockInput(params)
- Encoder Block = CompetitiveDenseBlockInput + Max Pooling.
- Methods
- Inherits the standard torch.nn.Module interface (see the full list under ClassifierBlock above). Class-specific method:
- forward(x)
- Computational graph for Encoder Block:
- CompetitiveDenseBlockInput 
- Max Pooling (+ retain indices) 
 
 - Parameters:
- x : tensor
- Feature map from previous block.
- Returns:
- Tensor
- The original feature map as received by the block. 
- Tensor
- The maxpooled feature map after applying max pooling to the original feature map. 
- Tensor
- The indices of the maxpool operation.
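
The triple returned here is what ties each encoder to its decoder counterpart: the first tensor becomes the skip connection (out_block), while the pooled map and the indices travel down the network and are later consumed by CompetitiveDecoderBlock.forward(x, out_block, indices). Below is a standalone sketch of that wiring using plain PyTorch stand-ins rather than the real CerebNet blocks; the placeholder convolutions merely stand in for the dense blocks.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ToyEncoder(nn.Module):
    """Stand-in encoder: placeholder dense block + max pooling with indices."""

    def __init__(self, channels: int = 64):
        super().__init__()
        self.dense = nn.Conv2d(channels, channels, 3, padding=1)  # placeholder dense block

    def forward(self, x):
        out_block = self.dense(x)                                  # kept as skip connection
        pooled, indices = F.max_pool2d(out_block, 2, 2, return_indices=True)
        return out_block, pooled, indices                          # the documented triple


class ToyDecoder(nn.Module):
    """Stand-in decoder: unpool + maxout with skip + placeholder dense block."""

    def __init__(self, channels: int = 64):
        super().__init__()
        self.dense = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, x, out_block, indices):
        unpooled = F.max_unpool2d(x, indices, kernel_size=2, stride=2)
        fused, _ = torch.max(torch.stack([unpooled, out_block], dim=0), dim=0)
        return self.dense(fused)


enc, dec = ToyEncoder(), ToyDecoder()
x = torch.randn(1, 64, 64, 64)
skip, down, idx = enc(x)        # encoder returns (out_block, pooled map, indices)
up = dec(down, skip, idx)       # decoder consumes them as (x, out_block, indices)
print(up.shape)                 # torch.Size([1, 64, 64, 64])
```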