Clustering and Dimension Reduction Models
The database supports the following machine learning models for clustering and dimension reduction.
Model Type: PRINCIPAL COMPONENT ANALYSIS
Principal Component Analysis (PCA) is typically not used as a model on its own; it is most commonly applied to the inputs of other models. PCA serves two purposes:
- PCA normalizes all numeric feature data. Some types of models are sensitive to the scale of numeric features, and when different features have different scales, the results are skewed toward the larger-scale features. PCA rescales all features to a common scale.
- PCA performs dimensionality reduction. PCA computes linear combinations of the original features that concentrate most of the signal into a smaller number of new features.
The input result set when creating a PCA model consists of N numeric columns that are all features; there is no label or dependent variable. After you create a PCA model, the sys.principal_component_analysis_models system catalog table contains information on the percentage of the signal that is in each PCA feature. You can use this information to determine how many of the output features to keep.
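For example, a quick way to inspect that information is to query the catalog table directly (the exact output columns depend on your deployment):

```sql
-- Inspect the signal captured by each PCA component.
-- The output columns depend on the deployment.
SELECT * FROM sys.principal_component_analysis_models;
```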
You can also use PCA models as inputs to other models. For example, if you have three features and you want to use PCA to reduce the number to two features, you can execute the following SQL statements.
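The following is a minimal sketch. It assumes a CREATE MLMODEL ... TYPE ... ON (query) syntax and hypothetical names (pca_model, my_table, and feature columns f1, f2, f3); check the exact keywords against your deployment.

```sql
-- Hypothetical syntax and names: train a PCA model over three features.
CREATE MLMODEL pca_model TYPE principal_component_analysis
ON (SELECT f1, f2, f3 FROM my_table);

-- Keep only the first two components as the new, reduced feature set.
-- Assumes the trailing integer argument selects which component to
-- return, mirroring the behavior described for LDA models below.
SELECT pca_model(f1, f2, f3, 1) AS pc1,
       pca_model(f1, f2, f3, 2) AS pc2
FROM my_table;
```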
You can use this model as input for another model, for example, logistic regression.
To correctly use this model later, you must pass the original features through the PCA model when you execute the logistic regression model.
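A hedged sketch of that pass-through, assuming a hypothetical logistic regression model lr_model trained on the two PCA components and a table new_rows to score:

```sql
-- Hypothetical names: new rows must flow through pca_model first,
-- because lr_model was trained on the PCA components, not on f1..f3.
SELECT lr_model(pca_model(f1, f2, f3, 1),
                pca_model(f1, f2, f3, 2)) AS prediction
FROM new_rows;
```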
Similarly, to create a PCA analysis over four variables, execute this SQL statement:
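Under the same assumed syntax and hypothetical names:

```sql
-- Hypothetical syntax and names: a PCA model over four feature columns.
CREATE MLMODEL pca_model4 TYPE principal_component_analysis
ON (SELECT f1, f2, f3, f4 FROM my_table);
```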
Model Type: KMEANS
K-means is an unsupervised clustering algorithm. All of the columns in the input result set are features; there is no label, and all of the input columns must be numeric. The algorithm finds k centroids such that each input point is assigned to the cluster with the closest centroid. Distance calculations are Euclidean by default.
Because there are no labels for clusters, when you execute this function after training with the same number (and same order) of features as input, the result is an integer that specifies the cluster to which the point belongs.
k - This option must be a positive integer. k specifies how many clusters the algorithm creates.
epsilon - If you specify this option, the value must be a valid positive floating-point value. When the maximum distance that any centroid moves from one iteration of the algorithm to the next is less than this value, the algorithm terminates. This parameter defaults to 1e-8.
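A minimal sketch of training and executing a K-means model, under the same assumed CREATE MLMODEL syntax, with a hypothetical OPTIONS clause carrying k and epsilon:

```sql
-- Hypothetical syntax and names: train K-means with 3 clusters.
CREATE MLMODEL km_model TYPE kmeans
ON (SELECT f1, f2 FROM my_table)
OPTIONS (k = 3, epsilon = 1e-6);

-- Executing the model returns the integer ID of the assigned cluster.
SELECT km_model(f1, f2) AS cluster_id FROM my_table;
```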
Model Type: GAUSSIAN MIXTURE MODEL
Gaussian Mixture Models (GMMs) find the mixture of N Gaussian distributions that best fits the data. The most common use is unsupervised clustering, but there are some key differences compared to K-means clustering:
- GMMs can handle clusters that are not circular, such as clusters shaped like ellipses or their higher-dimensional analogues, with different variances in different directions.
- Clusters can have an arbitrary rotation; that is, their features can have covariance.
- GMMs can handle clusters that are not equally likely. If a point lies directly between two clusters, it is more likely to belong to the cluster that is more common in the training data.
- GMMs can show the probability of a new point belonging to each of the clusters instead of providing only a single cluster value.
In general, GMMs can handle more complex data than K-means models, but they take longer to train and longer to execute.
When you execute this function after training with the same number (and same order) of features as input, the result is an integer that specifies the most probable cluster to which the point belongs.
numDistributions - This option must be a positive integer. It specifies the number of Gaussian distributions (clusters) for the model to fit.
epsilon - If you specify this option, the value must be a valid positive floating-point value. When the maximum distance that the best-fit model moves in its n-dimensional parameter space from one iteration to the next is less than this value, the algorithm terminates. This parameter defaults to 1e-8 (0.00000001 as a decimal number).
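A sketch under the same assumptions, with numDistributions passed through a hypothetical OPTIONS clause:

```sql
-- Hypothetical syntax and names: fit a mixture of 3 Gaussian distributions.
CREATE MLMODEL gmm_model TYPE gaussian_mixture_model
ON (SELECT f1, f2 FROM my_table)
OPTIONS (numDistributions = 3);

-- Returns the integer ID of the most probable cluster for each point.
SELECT gmm_model(f1, f2) AS cluster_id FROM my_table;
```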
Model Type: LINEAR DISCRIMINANT ANALYSIS
Linear Discriminant Analysis (LDA) is a dimension-reduction technique similar to PCA. While PCA is unsupervised, LDA is supervised: it uses class labels to find linear combinations of features that best separate the classes.
The input result set when you create an LDA model consists of N numeric columns that are all features, followed by a final column that contains the class label; the label can be any data type. After you create an LDA model, the sys.linear_discriminant_analysis_models system catalog table contains information on the percentage of the signal that is in each LDA feature. You can use this information to determine how many of the output features to keep.
You can use LDA models as inputs to other models. For example, to reduce the number of features from three to two using LDA, you can execute these SQL statements.
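A minimal sketch under the same assumed syntax; the hypothetical names are lda_model, my_table, features f1, f2, f3, and a class label column label, which is the last column of the training query:

```sql
-- Hypothetical syntax and names: train LDA on three features plus a label.
CREATE MLMODEL lda_model TYPE linear_discriminant_analysis
ON (SELECT f1, f2, f3, label FROM my_table);

-- Keep the first two LDA components; the trailing integer selects
-- the component to return (indexes start at 1).
SELECT lda_model(f1, f2, f3, 1) AS ld1,
       lda_model(f1, f2, f3, 2) AS ld2
FROM my_table;
```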
You can use this model as input for another model, for example, logistic regression.
To correctly use this model later, you must pass the original features through the LDA model when you execute the logistic regression model.
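As with PCA, a hedged sketch assuming a hypothetical downstream model lr_model and a table new_rows:

```sql
-- Hypothetical names: score new rows by passing the original features
-- through lda_model before they reach lr_model.
SELECT lr_model(lda_model(f1, f2, f3, 1),
                lda_model(f1, f2, f3, 2)) AS prediction
FROM new_rows;
```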
Similarly, to create an LDA model over four variables, execute this SQL statement.
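Under the same assumptions:

```sql
-- Hypothetical syntax and names: an LDA model over four features and a label.
CREATE MLMODEL lda_model4 TYPE linear_discriminant_analysis
ON (SELECT f1, f2, f3, f4, label FROM my_table);
```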
When you execute the trained LDA model, you must provide the same original input features in the same order, followed by a positive integer argument that specifies the LDA component to return. The LDA component index starts at 1.