Policy

class mushroom_rl.policy.policy.Policy[source]

Bases: object

Interface representing a generic policy. A policy is a probability distribution that, given a state, assigns a probability to each action. Policies are used by MushroomRL agents to interact with the environment.

__call__(*args)[source]

Compute the probability of taking action in a certain state following the policy.

Parameters:*args (list) – list containing a state or a state and an action.
Returns:The probability of each action in the given state if only the state is passed, otherwise the probability of the given action in the given state. If the action space is continuous, both state and action must be provided.
draw_action(state)[source]

Sample an action in state using the policy.

Parameters:state (np.ndarray) – the state where the agent is.
Returns:The action sampled from the policy.
reset()[source]

Useful when the policy needs a special initialization at the beginning of an episode.

__init__

Initialize self. See help(type(self)) for accurate signature.
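
To make the interface concrete, here is a minimal sketch of a custom policy built on top of this class. The class name and the uniform-random behaviour are illustrative assumptions rather than part of MushroomRL; only the methods documented above are used.

    import numpy as np

    from mushroom_rl.policy.policy import Policy


    class UniformDiscretePolicy(Policy):
        """Hypothetical policy that picks uniformly among n_actions actions."""

        def __init__(self, n_actions):
            self._n_actions = n_actions

        def __call__(self, *args):
            # Only the state given: return the probability of every action;
            # state and action given: return the probability of that action.
            if len(args) == 1:
                return np.ones(self._n_actions) / self._n_actions
            return 1. / self._n_actions

        def draw_action(self, state):
            # Sample an action index, wrapped in an array as MushroomRL expects.
            return np.array([np.random.randint(self._n_actions)])

        def reset(self):
            # No per-episode initialization is needed for this policy.
            pass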

class mushroom_rl.policy.policy.ParametricPolicy[source]

Bases: mushroom_rl.policy.policy.Policy

Interface for a generic parametric policy. A parametric policy is a policy that depends on a set of parameters, called the policy weights. If the policy is differentiable, the derivative of the probability for a specified state-action pair can be provided.

diff_log(state, action)[source]

Compute the gradient of the logarithm of the probability density function, in the specified state and action pair, i.e.:

\[\nabla_{\theta}\log p(s,a)\]
Parameters:
  • state (np.ndarray) – the state where the gradient is computed
  • action (np.ndarray) – the action where the gradient is computed
Returns:

The gradient of the logarithm of the pdf w.r.t. the policy weights

diff(state, action)[source]

Compute the derivative of the probability density function, in the specified state and action pair. Normally it is computed w.r.t. the derivative of the logarithm of the probability density function, exploiting the likelihood ratio trick, i.e.:

\[\nabla_{\theta}p(s,a)=p(s,a)\nabla_{\theta}\log p(s,a)\]
Parameters:
  • state (np.ndarray) – the state where the derivative is computed
  • action (np.ndarray) – the action where the derivative is computed
Returns:

The derivative w.r.t. the policy weights

set_weights(weights)[source]

Setter.

Parameters:weights (np.ndarray) – the vector of the new weights to be used by the policy.
get_weights()[source]

Getter.

Returns:The current policy weights.
weights_size

Property.

Returns:The size of the policy weights.
__call__(*args)

Compute the probability of taking action in a certain state following the policy.

Parameters:*args (list) – list containing a state or a state and an action.
Returns:The probability of each action in the given state if only the state is passed, otherwise the probability of the given action in the given state. If the action space is continuous, both state and action must be provided.
__init__

Initialize self. See help(type(self)) for accurate signature.

draw_action(state)

Sample an action in state using the policy.

Parameters:state (np.ndarray) – the state where the agent is.
Returns:The action sampled from the policy.
reset()

Useful when the policy needs a special initialization at the beginning of an episode.
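
As an illustration of how diff_log and the policy weights are typically used, the following sketch assembles a REINFORCE-style gradient estimate for an arbitrary differentiable ParametricPolicy. The function name, the episode format (a list of (state, action, reward) tuples) and the discount factor are assumptions made for the example.

    import numpy as np


    def reinforce_gradient(policy, episode, gamma=0.99):
        # Accumulate the discounted return backwards, weighting each score
        # function term grad log p(s, a) by the return observed from that step.
        grad = np.zeros(policy.weights_size)
        ret = 0.
        for state, action, reward in reversed(episode):
            ret = reward + gamma * ret
            grad += np.ravel(policy.diff_log(state, action)) * ret
        return grad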

Deterministic policy

class mushroom_rl.policy.deterministic_policy.DeterministicPolicy(mu)[source]

Bases: mushroom_rl.policy.policy.ParametricPolicy

Simple parametric policy representing a deterministic policy. As deterministic policies are degenerate probability distributions where all the probability mass is on the deterministic action, they are not differentiable, even if the mean value approximator is differentiable.

__init__(mu)[source]

Constructor.

Parameters:mu (Regressor) – the regressor representing the action to select in each state.
get_regressor()[source]

Getter.

Returns:The regressor used to map states to actions.
__call__(state, action)[source]

Compute the probability of taking action in a certain state following the policy.

Parameters:*args (list) – list containing a state or a state and an action.
Returns:The probability of each action in the given state if only the state is passed, otherwise the probability of the given action in the given state. If the action space is continuous, both state and action must be provided.
draw_action(state)[source]

Sample an action in state using the policy.

Parameters:state (np.ndarray) – the state where the agent is.
Returns:The action sampled from the policy.
set_weights(weights)[source]

Setter.

Parameters:weights (np.ndarray) – the vector of the new weights to be used by the policy.
get_weights()[source]

Getter.

Returns:The current policy weights.
weights_size

Property.

Returns:The size of the policy weights.
diff(state, action)

Compute the derivative of the probability density function, in the specified state and action pair. Normally it is computed w.r.t. the derivative of the logarithm of the probability density function, exploiting the likelihood ratio trick, i.e.:

\[\nabla_{\theta}p(s,a)=p(s,a)\nabla_{\theta}\log p(s,a)\]
Parameters:
  • state (np.ndarray) – the state where the derivative is computed
  • action (np.ndarray) – the action where the derivative is computed
Returns:

The derivative w.r.t. the policy weights

diff_log(state, action)

Compute the gradient of the logarithm of the probability density function, in the specified state and action pair, i.e.:

\[\nabla_{\theta}\log p(s,a)\]
Parameters:
  • state (np.ndarray) – the state where the gradient is computed
  • action (np.ndarray) – the action where the gradient is computed
Returns:

The gradient of the logarithm of the pdf w.r.t. the policy weights

reset()

Useful when the policy needs a special initialization at the beginning of an episode.
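
A minimal usage sketch, assuming the usual MushroomRL locations of Regressor and LinearApproximator; the shapes and weight values are arbitrary.

    import numpy as np

    from mushroom_rl.approximators.regressor import Regressor
    from mushroom_rl.approximators.parametric import LinearApproximator
    from mushroom_rl.policy.deterministic_policy import DeterministicPolicy

    # Linear map from a 3-dimensional state to a 2-dimensional action.
    mu = Regressor(LinearApproximator, input_shape=(3,), output_shape=(2,))
    pi = DeterministicPolicy(mu)

    # The policy weights are exactly the weights of the underlying regressor.
    pi.set_weights(np.ones(pi.weights_size))

    state = np.array([0.5, -1., 2.])
    print(pi.draw_action(state))   # always mu(state): no exploration noise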

Gaussian policy

class mushroom_rl.policy.gaussian_policy.AbstractGaussianPolicy[source]

Bases: mushroom_rl.policy.policy.ParametricPolicy

Abstract class of Gaussian policies.

__call__(state, action)[source]

Compute the probability of taking action in a certain state following the policy.

Parameters:*args (list) – list containing a state or a state and an action.
Returns:The probability of each action in the given state if only the state is passed, otherwise the probability of the given action in the given state. If the action space is continuous, both state and action must be provided.
draw_action(state)[source]

Sample an action in state using the policy.

Parameters:state (np.ndarray) – the state where the agent is.
Returns:The action sampled from the policy.
__init__

Initialize self. See help(type(self)) for accurate signature.

diff(state, action)

Compute the derivative of the probability density function, in the specified state and action pair. Normally it is computed w.r.t. the derivative of the logarithm of the probability density function, exploiting the likelihood ratio trick, i.e.:

\[\nabla_{\theta}p(s,a)=p(s,a)\nabla_{\theta}\log p(s,a)\]
Parameters:
  • state (np.ndarray) – the state where the derivative is computed
  • action (np.ndarray) – the action where the derivative is computed
Returns:

The derivative w.r.t. the policy weights

diff_log(state, action)

Compute the gradient of the logarithm of the probability density function, in the specified state and action pair, i.e.:

\[\nabla_{\theta}\log p(s,a)\]
Parameters:
  • state (np.ndarray) – the state where the gradient is computed
  • action (np.ndarray) – the action where the gradient is computed
Returns:

The gradient of the logarithm of the pdf w.r.t. the policy weights

get_weights()

Getter.

Returns:The current policy weights.
reset()

Useful when the policy needs a special initialization at the beginning of an episode.

set_weights(weights)

Setter.

Parameters:weights (np.ndarray) – the vector of the new weights to be used by the policy.
weights_size

Property.

Returns:The size of the policy weights.
class mushroom_rl.policy.gaussian_policy.GaussianPolicy(mu, sigma)[source]

Bases: mushroom_rl.policy.gaussian_policy.AbstractGaussianPolicy

Gaussian policy. This is a differentiable policy for continuous action spaces. The policy samples an action in every state following a Gaussian distribution, whose mean is computed from the state and whose covariance matrix is fixed.

__init__(mu, sigma)[source]

Constructor.

Parameters:
  • mu (Regressor) – the regressor representing the mean w.r.t. the state;
  • sigma (np.ndarray) – a square positive definite matrix representing the covariance matrix. The size of this matrix must be n x n, where n is the action dimensionality.
set_sigma(sigma)[source]

Setter.

Parameters:sigma (np.ndarray) – the new covariance matrix. Must be a square positive definite matrix.
diff_log(state, action)[source]

Compute the gradient of the logarithm of the probability density function, in the specified state and action pair, i.e.:

\[\nabla_{\theta}\log p(s,a)\]
Parameters:
  • state (np.ndarray) – the state where the gradient is computed
  • action (np.ndarray) – the action where the gradient is computed
Returns:

The gradient of the logarithm of the pdf w.r.t. the policy weights

set_weights(weights)[source]

Setter.

Parameters:weights (np.ndarray) – the vector of the new weights to be used by the policy.
get_weights()[source]

Getter.

Returns:The current policy weights.
weights_size

Property.

Returns:The size of the policy weights.
__call__(state, action)

Compute the probability of taking action in a certain state following the policy.

Parameters:*args (list) – list containing a state or a state and an action.
Returns:The probability of each action in the given state if only the state is passed, otherwise the probability of the given action in the given state. If the action space is continuous, both state and action must be provided.
diff(state, action)

Compute the derivative of the probability density function, in the specified state and action pair. Normally it is computed w.r.t. the derivative of the logarithm of the probability density function, exploiting the likelihood ratio trick, i.e.:

\[\nabla_{\theta}p(s,a)=p(s,a)\nabla_{\theta}\log p(s,a)\]
Parameters:
  • state (np.ndarray) – the state where the derivative is computed
  • action (np.ndarray) – the action where the derivative is computed
Returns:

The derivative w.r.t. the policy weights

draw_action(state)

Sample an action in state using the policy.

Parameters:state (np.ndarray) – the state where the agent is.
Returns:The action sampled from the policy.
reset()

Useful when the policy needs a special initialization at the beginning of an episode.
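
The sketch below builds a Gaussian policy around a linear mean regressor with a fixed covariance matrix; the Regressor and LinearApproximator import paths are assumed from the usual MushroomRL layout.

    import numpy as np

    from mushroom_rl.approximators.regressor import Regressor
    from mushroom_rl.approximators.parametric import LinearApproximator
    from mushroom_rl.policy.gaussian_policy import GaussianPolicy

    n_states, n_actions = 3, 2
    mu = Regressor(LinearApproximator, input_shape=(n_states,),
                   output_shape=(n_actions,))
    sigma = 1e-1 * np.eye(n_actions)        # fixed covariance matrix
    pi = GaussianPolicy(mu, sigma)

    state = np.random.rand(n_states)
    action = pi.draw_action(state)

    # Density of the sampled action and score function in (state, action):
    # only the mean weights are learnable, the covariance stays fixed.
    print(pi(state, action), pi.diff_log(state, action).shape)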

class mushroom_rl.policy.gaussian_policy.DiagonalGaussianPolicy(mu, std)[source]

Bases: mushroom_rl.policy.gaussian_policy.AbstractGaussianPolicy

Gaussian policy with learnable standard deviation. The covariance matrix is constrained to be a diagonal matrix, whose diagonal is the squared standard deviation vector. This is a differentiable policy for continuous action spaces. This policy is similar to the Gaussian policy, but the weights also include the standard deviation.

__init__(mu, std)[source]

Constructor.

Parameters:
  • mu (Regressor) – the regressor representing the mean w.r.t. the state;
  • std (np.ndarray) – a vector of standard deviations. The length of this vector must be equal to the action dimensionality.
set_std(std)[source]

Setter.

Parameters:std (np.ndarray) – the new vector of standard deviations. The length of this vector must be equal to the action dimensionality.
diff_log(state, action)[source]

Compute the gradient of the logarithm of the probability density function, in the specified state and action pair, i.e.:

\[\nabla_{\theta}\log p(s,a)\]
Parameters:
  • state (np.ndarray) – the state where the gradient is computed
  • action (np.ndarray) – the action where the gradient is computed
Returns:

The gradient of the logarithm of the pdf w.r.t. the policy weights

set_weights(weights)[source]

Setter.

Parameters:weights (np.ndarray) – the vector of the new weights to be used by the policy.
get_weights()[source]

Getter.

Returns:The current policy weights.
weights_size

Property.

Returns:The size of the policy weights.
__call__(state, action)

Compute the probability of taking action in a certain state following the policy.

Parameters:*args (list) – list containing a state or a state and an action.
Returns:The probability of each action in the given state if only the state is passed, otherwise the probability of the given action in the given state. If the action space is continuous, both state and action must be provided.
diff(state, action)

Compute the derivative of the probability density function, in the specified state and action pair. Normally it is computed w.r.t. the derivative of the logarithm of the probability density function, exploiting the likelihood ratio trick, i.e.:

\[\nabla_{\theta}p(s,a)=p(s,a)\nabla_{\theta}\log p(s,a)\]
Parameters:
  • state (np.ndarray) – the state where the derivative is computed
  • action (np.ndarray) – the action where the derivative is computed
Returns:

The derivative w.r.t. the policy weights

draw_action(state)

Sample an action in state using the policy.

Parameters:state (np.ndarray) – the state where the agent is.
Returns:The action sampled from the policy.
reset()

Useful when the policy needs a special initialization at the beginning of an episode.
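
A short sketch highlighting the difference from GaussianPolicy: as stated above, the weight vector also contains the standard deviations, so a gradient step can adapt the exploration noise as well. The regressor imports are assumed as in the previous sketches.

    import numpy as np

    from mushroom_rl.approximators.regressor import Regressor
    from mushroom_rl.approximators.parametric import LinearApproximator
    from mushroom_rl.policy.gaussian_policy import DiagonalGaussianPolicy

    n_states, n_actions = 3, 2
    mu = Regressor(LinearApproximator, input_shape=(n_states,),
                   output_shape=(n_actions,))
    pi = DiagonalGaussianPolicy(mu, std=0.2 * np.ones(n_actions))

    # weights_size exceeds the mean regressor's weights_size by the number
    # of standard deviation entries (one per action dimension).
    print(pi.weights_size, mu.weights_size)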

class mushroom_rl.policy.gaussian_policy.StateStdGaussianPolicy(mu, std, eps=1e-06)[source]

Bases: mushroom_rl.policy.gaussian_policy.AbstractGaussianPolicy

Gaussian policy with learnable standard deviation. The covariance matrix is constrained to be a diagonal matrix, whose diagonal is the squared standard deviation computed for each state. This is a differentiable policy for continuous action spaces. This policy is similar to the diagonal Gaussian policy, but a parametric regressor is used to compute the standard deviation, so the standard deviation depends on the current state.

__init__(mu, std, eps=1e-06)[source]

Constructor.

Parameters:
  • mu (Regressor) – the regressor representing the mean w.r.t. the state;
  • std (Regressor) – the regressor representing the standard deviations w.r.t. the state. The output dimensionality of the regressor must be equal to the action dimensionality;
  • eps (float, 1e-6) – a positive constant added to the variance to ensure that it is always greater than zero.
diff_log(state, action)[source]

Compute the gradient of the logarithm of the probability density function, in the specified state and action pair, i.e.:

\[\nabla_{\theta}\log p(s,a)\]
Parameters:
  • state (np.ndarray) – the state where the gradient is computed
  • action (np.ndarray) – the action where the gradient is computed
Returns:

The gradient of the logarithm of the pdf w.r.t. the policy weights

set_weights(weights)[source]

Setter.

Parameters:weights (np.ndarray) – the vector of the new weights to be used by the policy.
get_weights()[source]

Getter.

Returns:The current policy weights.
weights_size

Property.

Returns:The size of the policy weights.
__call__(state, action)

Compute the probability of taking action in a certain state following the policy.

Parameters:*args (list) – list containing a state or a state and an action.
Returns:The probability of each action in the given state if only the state is passed, otherwise the probability of the given action in the given state. If the action space is continuous, both state and action must be provided.
diff(state, action)

Compute the derivative of the probability density function, in the specified state and action pair. Normally it is computed w.r.t. the derivative of the logarithm of the probability density function, exploiting the likelihood ratio trick, i.e.:

\[\nabla_{\theta}p(s,a)=p(s,a)\nabla_{\theta}\log p(s,a)\]
Parameters:
  • state (np.ndarray) – the state where the derivative is computed
  • action (np.ndarray) – the action where the derivative is computed
Returns:

The derivative w.r.t. the policy weights

draw_action(state)

Sample an action in state using the policy.

Parameters:state (np.ndarray) – the state where the agent is.
Returns:The action sampled from the policy.
reset()

Useful when the policy needs a special initialization at the beginning of an episode.
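
A construction sketch with two linear regressors, one for the mean and one for the state-dependent standard deviation, both contributing to the policy weights; the regressor imports are assumed as before.

    import numpy as np

    from mushroom_rl.approximators.regressor import Regressor
    from mushroom_rl.approximators.parametric import LinearApproximator
    from mushroom_rl.policy.gaussian_policy import StateStdGaussianPolicy

    n_states, n_actions = 3, 2
    mu = Regressor(LinearApproximator, input_shape=(n_states,),
                   output_shape=(n_actions,))
    std = Regressor(LinearApproximator, input_shape=(n_states,),
                    output_shape=(n_actions,))
    pi = StateStdGaussianPolicy(mu, std)    # eps keeps the variance positive

    pi.set_weights(np.random.randn(pi.weights_size))
    print(pi.draw_action(np.random.rand(n_states)))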

class mushroom_rl.policy.gaussian_policy.StateLogStdGaussianPolicy(mu, log_std)[source]

Bases: mushroom_rl.policy.gaussian_policy.AbstractGaussianPolicy

Gaussian policy with learnable standard deviation. The covariance matrix is constrained to be a diagonal matrix, whose diagonal is obtained by an exponential transformation of the logarithm of the standard deviation computed in each state. This is a differentiable policy for continuous action spaces. This policy is similar to the state-std Gaussian policy, but here the regressor represents the logarithm of the standard deviation.

__init__(mu, log_std)[source]

Constructor.

Parameters:
  • mu (Regressor) – the regressor representing the mean w.r.t. the state;
  • log_std (Regressor) – the regressor representing the logarithm of the standard deviation w.r.t. the state. The output dimensionality of the regressor must be equal to the action dimensionality.
diff_log(state, action)[source]

Compute the gradient of the logarithm of the probability density function, in the specified state and action pair, i.e.:

\[\nabla_{\theta}\log p(s,a)\]
Parameters:
  • state (np.ndarray) – the state where the gradient is computed
  • action (np.ndarray) – the action where the gradient is computed
Returns:

The gradient of the logarithm of the pdf w.r.t. the policy weights

set_weights(weights)[source]

Setter.

Parameters:weights (np.ndarray) – the vector of the new weights to be used by the policy.
get_weights()[source]

Getter.

Returns:The current policy weights.
weights_size

Property.

Returns:The size of the policy weights.
__call__(state, action)

Compute the probability of taking action in a certain state following the policy.

Parameters:*args (list) – list containing a state or a state and an action.
Returns:The probability of each action in the given state if only the state is passed, otherwise the probability of the given action in the given state. If the action space is continuous, both state and action must be provided.
diff(state, action)

Compute the derivative of the probability density function, in the specified state and action pair. Normally it is computed w.r.t. the derivative of the logarithm of the probability density function, exploiting the likelihood ratio trick, i.e.:

\[\nabla_{\theta}p(s,a)=p(s,a)\nabla_{\theta}\log p(s,a)\]
Parameters:
  • state (np.ndarray) – the state where the derivative is computed
  • action (np.ndarray) – the action where the derivative is computed
Returns:

The derivative w.r.t. the policy weights

draw_action(state)

Sample an action in state using the policy.

Parameters:state (np.ndarray) – the state where the agent is.
Returns:The action sampled from the policy.
reset()

Useful when the policy needs a special initialization at the beginning of an episode.
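
The construction mirrors StateStdGaussianPolicy, but the second regressor outputs the logarithm of the standard deviation, so any real-valued output is admissible and the exponential transformation keeps the standard deviation strictly positive. A sketch with the same assumed regressor imports:

    import numpy as np

    from mushroom_rl.approximators.regressor import Regressor
    from mushroom_rl.approximators.parametric import LinearApproximator
    from mushroom_rl.policy.gaussian_policy import StateLogStdGaussianPolicy

    n_states, n_actions = 3, 2
    mu = Regressor(LinearApproximator, input_shape=(n_states,),
                   output_shape=(n_actions,))
    log_std = Regressor(LinearApproximator, input_shape=(n_states,),
                        output_shape=(n_actions,))
    pi = StateLogStdGaussianPolicy(mu, log_std)

    # Assuming the default (zero) linear weights, the standard deviation
    # in every state is exp(0) = 1.
    print(pi.draw_action(np.random.rand(n_states)))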

Noise policy

class mushroom_rl.policy.noise_policy.OrnsteinUhlenbeckPolicy(mu, sigma, theta, dt, x0=None)[source]

Bases: mushroom_rl.policy.policy.ParametricPolicy

Ornstein-Uhlenbeck process as implemented in: https://github.com/openai/baselines/blob/master/baselines/ddpg/noise.py.

This policy is commonly used in the Deep Deterministic Policy Gradient algorithm.

__init__(mu, sigma, theta, dt, x0=None)[source]

Constructor.

Parameters:
  • mu (Regressor) – the regressor representing the mean w.r.t. the state;
  • sigma (np.ndarray) – average magnitude of the random fluctuations per square-root of time;
  • theta (float) – rate of mean reversion;
  • dt (float) – time interval;
  • x0 (np.ndarray, None) – initial values of noise.
__call__(state, action)[source]

Compute the probability of taking action in a certain state following the policy.

Parameters:*args (list) – list containing a state or a state and an action.
Returns:The probability of each action in the given state if only the state is passed, otherwise the probability of the given action in the given state. If the action space is continuous, both state and action must be provided.
draw_action(state)[source]

Sample an action in state using the policy.

Parameters:state (np.ndarray) – the state where the agent is.
Returns:The action sampled from the policy.
set_weights(weights)[source]

Setter.

Parameters:weights (np.ndarray) – the vector of the new weights to be used by the policy.
get_weights()[source]

Getter.

Returns:The current policy weights.
weights_size

Property.

Returns:The size of the policy weights.
reset()[source]

Useful when the policy needs a special initialization at the beginning of an episode.

diff(state, action)

Compute the derivative of the probability density function, in the specified state and action pair. Normally it is computed w.r.t. the derivative of the logarithm of the probability density function, exploiting the likelihood ratio trick, i.e.:

\[\nabla_{\theta}p(s,a)=p(s,a)\nabla_{\theta}\log p(s,a)\]
Parameters:
  • state (np.ndarray) – the state where the derivative is computed
  • action (np.ndarray) – the action where the derivative is computed
Returns:

The derivative w.r.t. the policy weights

diff_log(state, action)

Compute the gradient of the logarithm of the probability density function, in the specified state and action pair, i.e.:

\[\nabla_{\theta}\log p(s,a)\]
Parameters:
  • state (np.ndarray) – the state where the gradient is computed
  • action (np.ndarray) – the action where the gradient is computed
Returns:

The gradient of the logarithm of the pdf w.r.t. the policy weights
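
A usage sketch for DDPG-style exploration: the policy adds temporally correlated noise to the output of a deterministic actor. The actor is a placeholder linear regressor and the noise hyperparameters are common but arbitrary choices.

    import numpy as np

    from mushroom_rl.approximators.regressor import Regressor
    from mushroom_rl.approximators.parametric import LinearApproximator
    from mushroom_rl.policy.noise_policy import OrnsteinUhlenbeckPolicy

    n_states, n_actions = 3, 1
    mu = Regressor(LinearApproximator, input_shape=(n_states,),
                   output_shape=(n_actions,))
    pi = OrnsteinUhlenbeckPolicy(mu, sigma=0.2 * np.ones(n_actions),
                                 theta=0.15, dt=1e-2)

    pi.reset()                         # restart the noise process each episode
    for _ in range(5):
        # Deterministic actor output plus correlated exploration noise.
        print(pi.draw_action(np.random.rand(n_states)))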

TD policy

class mushroom_rl.policy.td_policy.TDPolicy[source]

Bases: mushroom_rl.policy.policy.Policy

__init__()[source]

Constructor.

set_q(approximator)[source]
Parameters:approximator (object) – the approximator to use.
get_q()[source]
Returns:The approximator used by the policy.
__call__(*args)

Compute the probability of taking action in a certain state following the policy.

Parameters:*args (list) – list containing a state or a state and an action.
Returns:The probability of each action in the given state if only the state is passed, otherwise the probability of the given action in the given state. If the action space is continuous, both state and action must be provided.
draw_action(state)

Sample an action in state using the policy.

Parameters:state (np.ndarray) – the state where the agent is.
Returns:The action sampled from the policy.
reset()

Useful when the policy needs a special initialization at the beginning of an episode.

class mushroom_rl.policy.td_policy.EpsGreedy(epsilon)[source]

Bases: mushroom_rl.policy.td_policy.TDPolicy

Epsilon greedy policy.

__init__(epsilon)[source]

Constructor.

Parameters:epsilon (Parameter) – the exploration coefficient. It indicates the probability of performing a random action in the current step.
__call__(*args)[source]

Compute the probability of taking action in a certain state following the policy.

Parameters:*args (list) – list containing a state or a state and an action.
Returns:The probability of each action in the given state if only the state is passed, otherwise the probability of the given action in the given state. If the action space is continuous, both state and action must be provided.
draw_action(state)[source]

Sample an action in state using the policy.

Parameters:state (np.ndarray) – the state where the agent is.
Returns:The action sampled from the policy.
set_epsilon(epsilon)[source]

Setter.

Parameters:epsilon (Parameter) – the exploration coefficient. It indicates the probability of performing a random action in the current step.
update(*idx)[source]

Update the value of the epsilon parameter at the provided index (e.g. in case of different values of epsilon for each visited state according to the number of visits).

Parameters:*idx (list) – index of the parameter to be updated.
get_q()
Returns:The approximator used by the policy.
reset()

Useful when the policy needs a special initialization at the beginning of an episode.

set_q(approximator)
Parameters:approximator (object) – the approximator to use.
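
A tabular usage sketch follows. The import locations of Parameter and Table reflect the usual MushroomRL layout and may differ across versions; the state and Q-table shapes are arbitrary.

    import numpy as np

    from mushroom_rl.policy.td_policy import EpsGreedy
    from mushroom_rl.utils.parameters import Parameter
    from mushroom_rl.utils.table import Table

    n_states, n_actions = 10, 4

    pi = EpsGreedy(epsilon=Parameter(value=.1))
    pi.set_q(Table((n_states, n_actions)))   # Q-values, all zero at start

    state = np.array([3])
    print(pi(state))             # probability of each action in this state
    print(pi.draw_action(state))

    pi.set_epsilon(Parameter(value=0.))      # switch to the greedy policy
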
class mushroom_rl.policy.td_policy.Boltzmann(beta)[source]

Bases: mushroom_rl.policy.td_policy.TDPolicy

Boltzmann softmax policy.

__init__(beta)[source]

Constructor.

Parameters:beta (Parameter) – the inverse of the temperature of the distribution. As the temperature approaches infinity, the policy becomes more and more random; as the temperature approaches 0.0, the policy becomes more and more greedy.
__call__(*args)[source]

Compute the probability of taking action in a certain state following the policy.

Parameters:*args (list) – list containing a state or a state and an action.
Returns:The probability of each action in the given state if only the state is passed, otherwise the probability of the given action in the given state. If the action space is continuous, both state and action must be provided.
draw_action(state)[source]

Sample an action in state using the policy.

Parameters:state (np.ndarray) – the state where the agent is.
Returns:The action sampled from the policy.
set_beta(beta)[source]

Setter.

Parameters:beta (Parameter) – the inverse of the temperature of the distribution.
update(*idx)[source]

Update the value of the beta parameter at the provided index (e.g. in case of different values of beta for each visited state according to the number of visits).

Parameters:*idx (list) – index of the parameter to be updated.
get_q()
Returns:The approximator used by the policy.
reset()

Useful when the policy needs a special initialization at the beginning of an episode.

set_q(approximator)
Parameters:approximator (object) – the approximator to use.
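
A sketch analogous to the epsilon-greedy one, with hand-written Q-values so that the softmax probabilities differ across actions; the Parameter and Table import locations are assumed as before.

    import numpy as np

    from mushroom_rl.policy.td_policy import Boltzmann
    from mushroom_rl.utils.parameters import Parameter
    from mushroom_rl.utils.table import Table

    n_states, n_actions = 5, 3

    q = Table((n_states, n_actions))
    for a, value in enumerate([1., 0., -1.]):
        q[np.array([0]), np.array([a])] = value   # Q-values for state 0

    pi = Boltzmann(beta=Parameter(value=1.))      # beta: inverse temperature
    pi.set_q(q)

    state = np.array([0])
    print(pi(state))             # softmax of beta * Q(state, .)
    print(pi.draw_action(state))
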
class mushroom_rl.policy.td_policy.Mellowmax(omega, beta_min=-10.0, beta_max=10.0)[source]

Bases: mushroom_rl.policy.td_policy.Boltzmann

Mellowmax policy, as presented in: “An Alternative Softmax Operator for Reinforcement Learning”, Asadi K. and Littman M. L., 2017.

__init__(omega, beta_min=-10.0, beta_max=10.0)[source]

Constructor.

Parameters:
  • omega (Parameter) – the omega parameter of the policy from which beta of the Boltzmann policy is computed;
  • beta_min (float, -10.) – one end of the bracketing interval for minimization with Brent’s method;
  • beta_max (float, 10.) – the other end of the bracketing interval for minimization with Brent’s method.
set_beta(beta)[source]

Setter.

Parameters:beta (Parameter) – the inverse of the temperature of the distribution.
update(*idx)[source]

Update the value of the beta parameter at the provided index (e.g. in case of different values of beta for each visited state according to the number of visits).

Parameters:*idx (list) – index of the parameter to be updated.
__call__(*args)

Compute the probability of taking action in a certain state following the policy.

Parameters:*args (list) – list containing a state or a state and an action.
Returns:The probability of each action in the given state if only the state is passed, otherwise the probability of the given action in the given state. If the action space is continuous, both state and action must be provided.
draw_action(state)

Sample an action in state using the policy.

Parameters:state (np.ndarray) – the state where the agent is.
Returns:The action sampled from the policy.
get_q()
Returns:The approximator used by the policy.
reset()

Useful when the policy needs a special initialization at the beginning of an episode.

set_q(approximator)
Parameters:approximator (object) – the approximator to use.
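
Usage is the same as for Boltzmann, except that the constructor takes omega and beta is derived from it state by state. A brief sketch with the same assumed imports:

    import numpy as np

    from mushroom_rl.policy.td_policy import Mellowmax
    from mushroom_rl.utils.parameters import Parameter
    from mushroom_rl.utils.table import Table

    q = Table((5, 3))
    for a, value in enumerate([1., 0., -1.]):
        q[np.array([0]), np.array([a])] = value

    pi = Mellowmax(omega=Parameter(value=1.))
    pi.set_q(q)

    state = np.array([0])
    print(pi(state), pi.draw_action(state))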

Torch policy

class mushroom_rl.policy.torch_policy.TorchPolicy(use_cuda)[source]

Bases: mushroom_rl.policy.policy.Policy

Interface for a generic PyTorch policy. A PyTorch policy is a policy implemented as a neural network using PyTorch. Methods whose names end with ‘_t’ take tensors as input and, when required, also return tensors.

__init__(use_cuda)[source]

Constructor.

Parameters:use_cuda (bool) – whether to use cuda or not.
__call__(state, action)[source]

Compute the probability of taking action in a certain state following the policy.

Parameters:*args (list) – list containing a state or a state and an action.
Returns:The probability of each action in the given state if only the state is passed, otherwise the probability of the given action in the given state. If the action space is continuous, both state and action must be provided.
draw_action(state)[source]

Sample an action in state using the policy.

Parameters:state (np.ndarray) – the state where the agent is.
Returns:The action sampled from the policy.
distribution(state)[source]

Compute the policy distribution in the given states.

Parameters:state (np.ndarray) – the set of states where the distribution is computed.
Returns:The torch distribution for the provided states.
entropy(state=None)[source]

Compute the entropy of the policy.

Parameters:state (np.ndarray, None) – the set of states to consider. If the entropy of the policy can be computed in closed form, then state can be None.
Returns:The value of the entropy of the policy.
draw_action_t(state)[source]

Draw an action given a tensor.

Parameters:state (torch.Tensor) – set of states.
Returns:The tensor of the actions to perform in each state.
log_prob_t(state, action)[source]

Compute the logarithm of the probability of taking action in state.

Parameters:
  • state (torch.Tensor) – set of states.
  • action (torch.Tensor) – set of actions.
Returns:

The tensor of log-probability.

entropy_t(state=None)[source]

Compute the entropy of the policy.

Parameters:state (torch.Tensor) – the set of states to consider. If the entropy of the policy can be computed in closed form, then state can be None.
Returns:The tensor value of the entropy of the policy.
distribution_t(state)[source]

Compute the policy distribution in the given states.

Parameters:state (torch.Tensor) – the set of states where the distribution is computed.
Returns:The torch distribution for the provided states.
set_weights(weights)[source]

Setter.

Parameters:weights (np.ndarray) – the vector of the new weights to be used by the policy.
get_weights()[source]

Getter.

Returns:The current policy weights.
parameters()[source]

Returns the trainable policy parameters, as expected by torch optimizers.

Returns:List of parameters to be optimized.
reset()[source]

Useful when the policy needs a special initialization at the beginning of an episode.

use_cuda

True if the policy is using CUDA tensors.
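
To illustrate the ‘_t’ contract described above, here is a sketch of a hypothetical discrete (softmax) Torch policy. It assumes that implementing the tensor-based methods is enough for the base class to provide the NumPy-facing __call__, draw_action, entropy and distribution; exact tensor shapes are not checked, so treat it as a sketch rather than a drop-in implementation.

    import torch
    import torch.nn as nn

    from mushroom_rl.policy.torch_policy import TorchPolicy


    class CategoricalTorchPolicy(TorchPolicy):
        """Hypothetical softmax policy over discrete actions."""

        def __init__(self, n_features, n_actions, use_cuda=False):
            super().__init__(use_cuda)
            self._logits = nn.Linear(n_features, n_actions)
            if self.use_cuda:
                self._logits.cuda()

        def distribution_t(self, state):
            return torch.distributions.Categorical(logits=self._logits(state))

        def draw_action_t(self, state):
            return self.distribution_t(state).sample()

        def log_prob_t(self, state, action):
            return self.distribution_t(state).log_prob(action.squeeze(-1))

        def entropy_t(self, state=None):
            return self.distribution_t(state).entropy().mean()

        def parameters(self):
            return self._logits.parameters()

        def get_weights(self):
            return nn.utils.parameters_to_vector(
                self._logits.parameters()).detach().cpu().numpy()

        def set_weights(self, weights):
            nn.utils.vector_to_parameters(
                torch.as_tensor(weights, dtype=torch.float),
                self._logits.parameters())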

class mushroom_rl.policy.torch_policy.GaussianTorchPolicy(network, input_shape, output_shape, std_0=1.0, use_cuda=False, **params)[source]

Bases: mushroom_rl.policy.torch_policy.TorchPolicy

Torch policy implementing a Gaussian policy with trainable standard deviation. The standard deviation is not state-dependent.

__init__(network, input_shape, output_shape, std_0=1.0, use_cuda=False, **params)[source]

Constructor.

Parameters:
  • network (object) – the network class used to implement the mean regressor;
  • input_shape (tuple) – the shape of the state space;
  • output_shape (tuple) – the shape of the action space;
  • std_0 (float, 1.) – initial standard deviation;
  • use_cuda (bool, False) – whether to use cuda or not;
  • params (dict) – parameters used by the network constructor.
draw_action_t(state)[source]

Draw an action given a tensor.

Parameters:state (torch.Tensor) – set of states.
Returns:The tensor of the actions to perform in each state.
log_prob_t(state, action)[source]

Compute the logarithm of the probability of taking action in state.

Parameters:
  • state (torch.Tensor) – set of states.
  • action (torch.Tensor) – set of actions.
Returns:

The tensor of log-probability.

entropy_t(state=None)[source]

Compute the entropy of the policy.

Parameters:state (torch.Tensor) – the set of states to consider. If the entropy of the policy can be computed in closed form, then state can be None.
Returns:The tensor value of the entropy of the policy.
distribution_t(state)[source]

Compute the policy distribution in the given states.

Parameters:state (torch.Tensor) – the set of states where the distribution is computed.
Returns:The torch distribution for the provided states.
set_weights(weights)[source]

Setter.

Parameters:weights (np.ndarray) – the vector of the new weights to be used by the policy.
get_weights()[source]

Getter.

Returns:The current policy weights.
parameters()[source]

Returns the trainable policy parameters, as expected by torch optimizers.

Returns:List of parameters to be optimized.
__call__(state, action)

Compute the probability of taking action in a certain state following the policy.

Parameters:*args (list) – list containing a state or a state and an action.
Returns:The probability of each action in the given state if only the state is passed, otherwise the probability of the given action in the given state. If the action space is continuous, both state and action must be provided.
distribution(state)

Compute the policy distribution in the given states.

Parameters:state (np.ndarray) – the set of states where the distribution is computed.
Returns:The torch distribution for the provided states.
draw_action(state)

Sample an action in state using the policy.

Parameters:state (np.ndarray) – the state where the agent is.
Returns:The action sampled from the policy.
entropy(state=None)

Compute the entropy of the policy.

Parameters:state (np.ndarray, None) – the set of states to consider. If the entropy of the policy can be computed in closed form, then state can be None.
Returns:The value of the entropy of the policy.
reset()

Useful when the policy needs a special initialization at the beginning of an episode.

use_cuda

True if the policy is using CUDA tensors.
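
Finally, a usage sketch for GaussianTorchPolicy. The Network class is an illustrative mean regressor following the common MushroomRL convention of accepting input_shape and output_shape in its constructor; n_features is an assumed extra keyword forwarded through params.

    import numpy as np
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    from mushroom_rl.policy.torch_policy import GaussianTorchPolicy


    class Network(nn.Module):
        def __init__(self, input_shape, output_shape, n_features=32, **kwargs):
            super().__init__()
            self._h = nn.Linear(input_shape[-1], n_features)
            self._out = nn.Linear(n_features, output_shape[0])

        def forward(self, state, **kwargs):
            return self._out(F.relu(self._h(state.float())))


    pi = GaussianTorchPolicy(Network, input_shape=(3,), output_shape=(2,),
                             std_0=1., n_features=32)

    state = np.random.rand(3)
    print(pi.draw_action(state))
    print(pi.entropy())            # closed form: no states are needed
    print(len(list(pi.parameters())))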