```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(10, 5),
    nn.ReLU(),
    nn.Linear(5, 2)
)

# Retrieve the named parameters of the model
for name, param in model.named_parameters():
    print(name, param.shape)

# Output:
# 0.weight torch.Size([5, 10])
# 0.bias torch.Size([5])
# 2.weight torch.Size([2, 5])
# 2.bias torch.Size([2])
```

In this example, we create a simple sequential model with two linear layers and a ReLU activation between them. We then retrieve the model's named parameters with the `named_parameters()` method and print each parameter's name and shape; the names (`0.weight`, `0.bias`, ...) come from the layers' positions in the `Sequential` container. Separately, the `torch.nn.parallel` package can be used to parallelize computations across multiple GPUs.
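As a minimal sketch of that idea, the same model can be wrapped in `nn.DataParallel`, which splits each input batch across the available GPUs (when none are present, it simply runs the wrapped module on the CPU, so the snippet below is runnable either way). Note that wrapping also changes the parameter names reported by `named_parameters()`: they gain a `module.` prefix.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 5), nn.ReLU(), nn.Linear(5, 2))

# DataParallel replicates the module on each visible GPU and scatters the
# batch across them; with zero or one GPU it falls back to a plain forward.
parallel_model = nn.DataParallel(model)

x = torch.randn(4, 10)        # a batch of 4 samples with 10 features each
out = parallel_model(x)
print(out.shape)              # torch.Size([4, 2])

# Parameter names are now prefixed with "module."
print(next(iter(parallel_model.named_parameters()))[0])  # module.0.weight
```

For multi-machine or more scalable single-machine training, `torch.nn.parallel.DistributedDataParallel` is generally preferred over `DataParallel`, but the wrapper above is the shortest way to illustrate the package.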