6.4 Summary

Numeric features, in their raw form, may not be in an effective form that allows a model to find a good relationship with the response. For example, a predictor may be measured on its original scale, but its squared version is what is truly related to the response. Straightforward transformations such as centering, scaling, or transforming a distribution toward symmetry are necessary steps for some models to be able to identify predictive signal. Other transformations such as basis expansions and splines can translate a predictor in its original scale to nonlinear scales that may be informative, as sketched below.
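As an illustration, the following minimal sketch expands a single predictor into a cubic spline basis using scikit-learn's `SplineTransformer`; the data, the number of knots, and the degree are arbitrary choices for demonstration, not recommendations.

```python
import numpy as np
from sklearn.preprocessing import SplineTransformer

# Illustrative data: one predictor measured on its original scale.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=(200, 1))

# Expand the predictor into a cubic spline basis. The number of
# knots and the degree are tuning choices made up for this sketch.
spline = SplineTransformer(n_knots=5, degree=3)
x_basis = spline.fit_transform(x)

# One column per basis function: n_knots + degree - 1 = 7 here.
print(x_basis.shape)  # (200, 7)
```

Each new column is a smooth, local function of the original predictor, so a linear model fit on the expanded columns can represent a nonlinear trend.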

But expanding the predictor space comes at a computational cost. Another way to explore nonlinear relationships between predictors and the response is through the combination of a kernel function and PCA. This approach is very computationally efficient and enables the exploration of a much larger dimensional space.
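A minimal sketch of this idea, assuming scikit-learn's `KernelPCA` with a radial basis function kernel (the kernel choice, `gamma` value, and component count are illustrative):

```python
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))

# Kernel PCA works implicitly in the expanded feature space induced
# by the RBF kernel, without ever materializing that space.
kpca = KernelPCA(n_components=5, kernel="rbf", gamma=0.1)
X_kpca = kpca.fit_transform(X)

print(X_kpca.shape)  # (200, 5)
```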

Instead of expanding the predictor space, it may be necessary to reduce the dimension of the predictors. This can be accomplished using unsupervised techniques such as PCA, ICA, or NNMF, or a supervised approach like PLS.
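The sketch below contrasts these options using scikit-learn implementations (`PCA`, `FastICA`, `NMF` for non-negative matrix factorization, and `PLSRegression` for partial least squares); the data and component counts are arbitrary.

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA, NMF
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(2)
X = rng.uniform(0, 1, size=(100, 20))  # non-negative, so NMF applies
y = rng.normal(size=100)

# Unsupervised reductions ignore the response entirely.
X_pca = PCA(n_components=3).fit_transform(X)
X_ica = FastICA(n_components=3, random_state=2).fit_transform(X)
X_nmf = NMF(n_components=3, random_state=2).fit_transform(X)

# PLS is supervised: its components are chosen to covary with y.
pls = PLSRegression(n_components=3).fit(X, y)
X_pls = pls.transform(X)
```

The practical distinction is that the unsupervised scores summarize the predictors alone, while the PLS scores are steered toward directions that are also predictive of the response.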

Finally, autoencoders, the spatial sign transformation, and distance or depth measures offer novel engineering approaches that can harness information in unlabeled data or dampen the effect of extreme samples.
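For instance, the spatial sign transformation projects each centered and scaled sample onto the unit sphere, which bounds the influence of outliers. A minimal sketch with NumPy and scikit-learn (the data and the injected extreme sample are illustrative):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 4))
X[0] = 50.0  # an artificially extreme sample

# Center and scale first, then divide each row by its Euclidean norm;
# every sample lands on the unit sphere, so outliers lose leverage.
X_scaled = StandardScaler().fit_transform(X)
X_sign = X_scaled / np.linalg.norm(X_scaled, axis=1, keepdims=True)

print(np.linalg.norm(X_sign, axis=1)[:3])  # all 1.0
```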