Softplus
Plot of the softplus function and the ramp function

In mathematics and machine learning, the softplus function is

$$f(x) = \ln\left(1 + e^{x}\right).$$

It is a smooth approximation (in fact, an analytic function) to the ramp function, which is known as the rectifier or ReLU (rectified linear unit) in machine learning. For large negative $x$ it is $\ln\left(1 + e^{x}\right) \approx \ln 1 = 0$, so just above 0, while for large positive $x$ it is $\ln\left(1 + e^{x}\right) \approx \ln\left(e^{x}\right) = x$, so just above $x$.

The names softplus[1][2] and SmoothReLU[3] are used in machine learning. The name "softplus" (2000), by analogy with the earlier softmax (1989), is presumably because it is a smooth (soft) approximation of the positive part of $x$, which is sometimes denoted with a superscript plus, $x^{+} := \max(0, x)$.
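
A minimal numerical sketch (not part of the article, using NumPy) comparing softplus with the ramp function; np.logaddexp(0, x) evaluates $\ln\left(1 + e^{x}\right)$ in a numerically stable way:

import numpy as np

def softplus(x):
    # ln(1 + e^x), evaluated stably as logaddexp(0, x)
    return np.logaddexp(0.0, x)

def relu(x):
    # the ramp function max(0, x)
    return np.maximum(0.0, x)

x = np.array([-10.0, -1.0, 0.0, 1.0, 10.0])
print(softplus(x))  # approx [4.5e-05, 0.313, 0.693, 1.313, 10.00005]
print(relu(x))      # [ 0.  0.  0.  1. 10.]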

Alternative forms


This function can be approximated as:

$$\ln\left(1 + e^{x}\right) \approx \begin{cases} \ln 2, & x = 0, \\[4pt] \dfrac{x}{1 - e^{-x/\ln 2}}, & x \neq 0. \end{cases}$$

By making the change of variables $x = y \ln(2)$, this is equivalent to

$$\log_2\left(1 + 2^{y}\right) \approx \begin{cases} 1, & y = 0, \\[4pt] \dfrac{y}{1 - e^{-y}}, & y \neq 0. \end{cases}$$

A sharpness parameter $k$ may be included:

$$f(x) = \frac{\ln\left(1 + e^{kx}\right)}{k}, \qquad f'(x) = \frac{e^{kx}}{1 + e^{kx}} = \frac{1}{1 + e^{-kx}}.$$
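
As an illustration of the sharpness parameter (a sketch, not from the article; the helper name softplus_k is made up for this example), softplus with parameter $k$ approaches the ramp function as $k$ grows:

import numpy as np

def softplus_k(x, k=1.0):
    # ln(1 + e^(k*x)) / k, evaluated stably via logaddexp
    return np.logaddexp(0.0, k * x) / k

for k in (1.0, 5.0, 25.0):
    print(k, softplus_k(1.0, k))
# 1.0   1.3133...
# 5.0   1.0013...
# 25.0  1.0000...   -> approaches max(0, 1.0) = 1.0 as k increases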

Derivative

The derivative of softplus is the logistic function:

$$f'(x) = \frac{e^{x}}{1 + e^{x}} = \frac{1}{1 + e^{-x}}.$$

The logistic function (sigmoid function) is a smooth approximation of the derivative of the rectifier, the Heaviside step function.
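
A quick numerical check (a sketch, not part of the article) that the derivative of softplus matches the logistic function, using a central finite difference:

import numpy as np

def softplus(x):
    return np.logaddexp(0.0, x)

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

x = np.linspace(-5.0, 5.0, 11)
h = 1e-6
finite_diff = (softplus(x + h) - softplus(x - h)) / (2.0 * h)
print(np.max(np.abs(finite_diff - logistic(x))))  # on the order of 1e-10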

LogSumExp


The multivariable generalization of single-variable softplus is the LogSumExp with the first argument set to zero:

$$\operatorname{LSE_0^{+}}(x_1, \dots, x_n) := \operatorname{LSE}(0, x_1, \dots, x_n) = \ln\left(1 + e^{x_1} + \cdots + e^{x_n}\right).$$

The LogSumExp function itself is

$$\operatorname{LSE}(x_1, \dots, x_n) = \ln\left(e^{x_1} + \cdots + e^{x_n}\right),$$

and its gradient is the softmax; the softmax with the first argument set to zero is the multivariable generalization of the logistic function. Both LogSumExp and softmax are used in machine learning.
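
A short sketch (not from the article) showing that LogSumExp with the first argument set to zero reduces to softplus for a single variable, and that the gradient of LogSumExp, the softmax, likewise reduces to the logistic function:

import numpy as np

def logsumexp(xs):
    # ln(sum_i e^{x_i}), shifted by the max for numerical stability
    m = np.max(xs)
    return m + np.log(np.sum(np.exp(xs - m)))

def softmax(xs):
    # gradient of logsumexp
    e = np.exp(xs - np.max(xs))
    return e / e.sum()

x = 2.0
print(logsumexp(np.array([0.0, x])), np.logaddexp(0.0, x))       # both ln(1 + e^2) = 2.1269...
print(softmax(np.array([0.0, x]))[1], 1.0 / (1.0 + np.exp(-x)))  # both logistic(2) = 0.8808...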

Convex conjugate


The convex conjugate (specifically, the Legendre transformation) of the softplus function is the negative binary entropy (with base e). This is because (following the definition of the Legendre transformation: the derivatives are inverse functions) the derivative of softplus is the logistic function, whose inverse function is the logit, which is the derivative of negative binary entropy.
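
A short derivation sketch (not part of the original text): writing $f(x) = \ln\left(1 + e^{x}\right)$, the conjugate is $f^{*}(y) = \sup_x \left( xy - f(x) \right)$. Setting the derivative of $xy - f(x)$ to zero gives $y = \frac{e^{x}}{1 + e^{x}}$, i.e. $x = \operatorname{logit}(y) = \ln\frac{y}{1 - y}$ for $0 < y < 1$, and substituting back,

$$f^{*}(y) = y \ln\frac{y}{1 - y} + \ln(1 - y) = y \ln y + (1 - y)\ln(1 - y),$$

which is the negative binary entropy.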

Softplus can be interpreted as logistic loss (as a positive number), so, by duality, minimizing logistic loss corresponds to maximizing entropy. This justifies the principle of maximum entropy as loss minimization.
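
Concretely (an illustrative note, not from the article): the logistic loss for a positive example with score $x$ is $-\ln \sigma(x) = \ln\left(1 + e^{-x}\right) = f(-x)$, i.e. softplus of the negated score, which is the sense in which softplus is the logistic loss expressed as a positive number.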

References
