RLPack
 
C_GradAccumulator Class Reference

#include <C_GradAccumulator.h>

Public Member Functions

void accumulate (std::map< std::string, torch::Tensor > &namedParameters)
 
 C_GradAccumulator (std::vector< std::string > &parameterKeys, int64_t bootstrapRounds)
 
void clear ()
 
void delete_item (int64_t index)
 
std::map< std::string, torch::Tensor > get_item (int64_t index)
 
std::map< std::string, torch::Tensor > mean_reduce ()
 
void set_item (int64_t index, std::map< std::string, torch::Tensor > &namedParameters)
 
size_t size ()
 
std::map< std::string, torch::Tensor > sum_reduce ()
 
 ~C_GradAccumulator ()
 

Private Attributes

int64_t bootstrapRounds_
 The number of bootstrap rounds over which accumulation and reduction are to take place.
 
std::vector< std::map< std::string, torch::Tensor > > namedParametersGrads_
 The vector in which gradients are accumulated.
 
std::vector< std::string > parameterKeys_
 The parameter keys of the model for which gradient accumulation is being done.
 
std::map< std::string, torch::Tensor > reducedParams_
 The map storing the final results of reduced parameters.
 

Constructor & Destructor Documentation

◆ C_GradAccumulator()

C_GradAccumulator::C_GradAccumulator ( std::vector< std::string > &  parameterKeys,
int64_t  bootstrapRounds 
)

Class constructor for C_GradAccumulator. It reserves the memory of namedParametersGrads_ for gradient accumulation. This is the C++ backend equivalent of rlpack._C.grad_accumulator.GradAccumulator.__init__.

Parameters
parameterKeys: The parameter keys of the model for which gradient accumulation is being done.
bootstrapRounds: The number of bootstrap rounds over which accumulation and reduction are to take place.
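
A minimal construction sketch (the parameter keys shown are illustrative assumptions; they must match the model's named parameters):

    #include <string>
    #include <vector>
    #include "C_GradAccumulator.h"

    std::vector<std::string> parameterKeys = {"fc.weight", "fc.bias"};
    // Reserve accumulation space for 4 bootstrap rounds.
    C_GradAccumulator accumulator(parameterKeys, 4);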

◆ ~C_GradAccumulator()

C_GradAccumulator::~C_GradAccumulator ( )
default

Default destructor for C_GradAccumulator.

Member Function Documentation

◆ accumulate()

void C_GradAccumulator::accumulate ( std::map< std::string, torch::Tensor > &  namedParameters)

This method accumulates the gradients from the given named parameters. It throws an error if you attempt to accumulate more gradients than the bootstrapRounds value passed to the class constructor. This is the C++ backend equivalent of rlpack._C.grad_accumulator.GradAccumulator.accumulate.

Parameters
namedParameters: Map of the model's named parameters whose gradients are to be accumulated.
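
An illustrative accumulation loop (a sketch, assuming a libtorch model and assuming the accumulator reads gradients from the passed parameters; the Linear module, loss, and round count are stand-ins, not part of RLPack):

    #include <map>
    #include <string>
    #include <vector>
    #include <torch/torch.h>
    #include "C_GradAccumulator.h"

    int main() {
        torch::nn::Linear fc(4, 2);
        std::vector<std::string> keys;
        for (const auto &item : fc->named_parameters())
            keys.push_back(item.key());
        C_GradAccumulator accumulator(keys, 2);

        for (int round = 0; round < 2; ++round) {
            fc->zero_grad();
            auto loss = fc->forward(torch::randn({8, 4})).sum();
            loss.backward();
            // Pass the named parameters; their gradients are accumulated.
            std::map<std::string, torch::Tensor> namedParameters;
            for (const auto &item : fc->named_parameters())
                namedParameters[item.key()] = item.value();
            accumulator.accumulate(namedParameters);
        }
    }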

◆ clear()

void C_GradAccumulator::clear ( )

Clears all the accumulated gradients. This is the C++ backend equivalent of rlpack._C.grad_accumulator.GradAccumulator.clear.

◆ delete_item()

void C_GradAccumulator::delete_item ( int64_t  index)

Method to delete the named parameter gradients at the given index.

Parameters
index: The index at which we wish to delete the gradient values.

◆ get_item()

std::map< std::string, torch::Tensor > C_GradAccumulator::get_item ( int64_t  index)

Method to get the named parameter gradients at the given index.

Parameters
index: The index at which we wish to obtain the gradient values.
Returns
The map of parameter keys and gradient values.
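
A per-index access sketch, continuing the accumulator from the example above (assumes at least one round has already been accumulated at index 0):

    // Inspect the gradients stored at index 0.
    std::map<std::string, torch::Tensor> grads = accumulator.get_item(0);
    // Scale them and write them back at the same index via set_item.
    for (auto &pair : grads)
        pair.second = pair.second * 0.5;
    accumulator.set_item(0, grads);
    // Or remove that entry entirely with delete_item.
    accumulator.delete_item(0);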

◆ mean_reduce()

std::map< std::string, torch::Tensor > C_GradAccumulator::mean_reduce ( )

Performs mean reduction of the accumulated gradients and returns the reduced named gradients. This is the C++ backend equivalent of rlpack._C.grad_accumulator.GradAccumulator.mean_reduce.
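
Continuing the sketch above, the reduced gradients might be copied back into the model before an optimizer step (this usage pattern is an assumption, not prescribed by the API):

    std::map<std::string, torch::Tensor> meanGrads = accumulator.mean_reduce();
    for (const auto &item : fc->named_parameters())
        // Overwrite each parameter's gradient with its mean-reduced value.
        item.value().mutable_grad() = meanGrads[item.key()];
    // Reset the accumulator for the next set of bootstrap rounds.
    accumulator.clear();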

◆ set_item()

void C_GradAccumulator::set_item ( int64_t  index,
std::map< std::string, torch::Tensor > &  namedParameters 
)

Method to set the named parameter gradients at the given index.

Parameters
index: The index at which we wish to set the gradient values.
namedParameters: The item we wish to set at the given index.

◆ size()

size_t C_GradAccumulator::size ( )
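
Returns the number of named parameter gradient maps accumulated so far.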

◆ sum_reduce()

std::map< std::string, torch::Tensor > C_GradAccumulator::sum_reduce ( )

Performs sum reduction of the accumulated gradients and returns the reduced named gradients. This is the C++ backend equivalent of rlpack._C.grad_accumulator.GradAccumulator.sum_reduce.
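
Used analogously to mean_reduce; a minimal sketch:

    // Element-wise sum, rather than mean, over the accumulated rounds.
    std::map<std::string, torch::Tensor> sumGrads = accumulator.sum_reduce();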

Field Documentation

◆ bootstrapRounds_

int64_t C_GradAccumulator::bootstrapRounds_
private

The number of bootstrap rounds over which accumulation and reduction are to take place.

◆ namedParametersGrads_

std::vector<std::map<std::string, torch::Tensor> > C_GradAccumulator::namedParametersGrads_
private

The vector in which gradients are accumulated.

◆ parameterKeys_

std::vector<std::string> C_GradAccumulator::parameterKeys_
private

The parameter keys of the model for which gradient accumulation is being done.

◆ reducedParams_

std::map<std::string, torch::Tensor> C_GradAccumulator::reducedParams_
private

The map storing the final results of reduced parameters.