Categorical predictors can take a variety of forms in the data to be modeled. With the exception of tree-based models, most models require that categorical predictors first be converted to numeric representations. The simplest feature engineering technique for a categorical predictor is to convert each category to a separate binary dummy predictor. Even this basic conversion carries a caution: some models require one fewer dummy predictor than the number of categories. Moreover, creating dummy predictors may not be the most effective way of extracting predictive information from a categorical predictor. If, for instance, the predictor has ordered categories, then other techniques such as linear or polynomial contrasts may relate more effectively to the outcome.
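As a minimal sketch of the dummy-predictor conversion, the example below (using pandas and an invented `color` column) creates both the full set of binary indicators and the reduced set, with one reference level dropped, that models such as ordinary linear regression require:

```python
import pandas as pd

# Hypothetical data: one categorical predictor with three levels
df = pd.DataFrame({"color": ["red", "green", "blue", "green"]})

# Full set of dummy predictors: one binary column per category
full = pd.get_dummies(df["color"])

# Reduced set for models that need one fewer dummy than the
# number of categories: drop a reference level
reduced = pd.get_dummies(df["color"], drop_first=True)

print(full.shape[1])     # 3 columns, one per category
print(reduced.shape[1])  # 2 columns, reference level dropped
```

The dropped level is not lost: a row of all zeros in the reduced encoding identifies it implicitly.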
Text fields, too, can be viewed as an agglomeration of categorical predictors and must likewise be converted to numeric values. A host of approaches for converting text to numeric representations has emerged in recent years. These approaches often include a filtering step to remove highly frequent, non-descriptive words.
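A bare-bones sketch of this idea, using only the standard library: count word occurrences across documents, filter out highly frequent words (here via a simple frequency cutoff, standing in for the curated stop-word lists a real pipeline would use), and represent each document as a vector of term counts:

```python
from collections import Counter

docs = [
    "the cat sat on the mat",
    "the dog chased the cat",
]

# Tokenize and count word frequencies across all documents
tokens = [doc.split() for doc in docs]
counts = Counter(word for doc in tokens for word in doc)

# Filtering step: drop highly frequent, non-descriptive words.
# The cutoff of 3 is arbitrary for this toy example.
max_freq = 3
vocab = sorted(w for w, c in counts.items() if c <= max_freq)

# Each document becomes a numeric vector of term counts
vectors = [[doc.count(w) for w in vocab] for doc in tokens]
print(vocab)    # "the" has been filtered out
print(vectors)
```

Each text field thus yields one numeric column per retained word, mirroring the dummy-predictor idea at a much larger scale.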
Done well, feature engineering for categorical predictors can unlock important predictive information related to the outcome. The next chapter will focus on using feature engineering techniques to uncover additional predictive information within continuous predictors.