The processing elements of an artificial neural network apply a transfer function to the weighted sum of their inputs. A very commonly used transfer function is the sigmoid. It is shown that the recently published idea of changing the so-called scaling parameter of this function during training of the network is, in effect, identical to a combination of two well-known techniques in function fitting: shaking the parameters to be fitted and adjusting the learning parameter. The effect of modifying the scaling parameter is thereby understood and explained.
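A minimal sketch of this equivalence may help; the notation used here (a sigmoid $\sigma_\beta$ with scaling parameter $\beta$, weight vector $w$, error function $E$, and learning rate $\eta$) is assumed for illustration and does not appear in the original. Since
\[
  \sigma_\beta\!\left(w^{\mathsf T}x\right)
  \;=\; \frac{1}{1 + e^{-\beta\, w^{\mathsf T}x}}
  \;=\; \sigma_1\!\left(\tilde w^{\mathsf T}x\right),
  \qquad \tilde w := \beta w,
\]
a processing element with scaling parameter $\beta$ and weights $w$ computes the same function as one with unit scaling and rescaled weights $\tilde w = \beta w$. Because $\partial E/\partial w = \beta\,\partial E/\partial \tilde w$, a gradient step on $w$ corresponds, in the rescaled variables, to
\[
  w \leftarrow w - \eta\,\frac{\partial E}{\partial w}
  \quad\Longleftrightarrow\quad
  \tilde w \leftarrow \tilde w - \beta^{2}\eta\,\frac{\partial E}{\partial \tilde w},
\]
so, under these assumptions, changing $\beta$ during training simultaneously rescales the parameters being fitted (shaking) and multiplies the effective learning parameter by $\beta^{2}$ (adjusting the learning parameter).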