Feature Selection using chemml.optimization.GeneticAlgorithm

We use a sample dataset from the ChemML library that contains SMILES strings and Dragon molecular descriptors for 500 small organic molecules, along with their densities in \(kg/m^3\). For simplicity, we perform feature selection with the Genetic Algorithm on a subset of 20 molecular descriptors.

For more information on the Genetic Algorithm, please refer to our paper.

[1]:
from chemml.datasets import load_organic_density

# Load the 500-molecule dataset; the SMILES column is not needed here.
_, density, features = load_organic_density()

# Keep only the first 20 descriptor columns for this example.
num_cols = 20
col_names = features.iloc[:, :num_cols].columns
features = features.iloc[:, :num_cols]

print(density.shape, features.shape)
density, features = density.values, features.values
(500, 1) (500, 20)

Defining hyperparameter space

For feature selection, each feature is encoded as a binary bit of the chromosome:

  • 0 indicates the feature is discarded.

  • 1 indicates the feature is selected.

[2]:
space = tuple([{i: {'choice': [0,1]}} for i in range(features.shape[1])])
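Each element of space is a dictionary keyed by the feature index, whose 'choice' is the set of allowed bit values for that gene, so the space has exactly one gene per descriptor. A quick way to confirm this from the object just built:

print(len(space))  # 20 genes, one per descriptor
print(space[0])    # {0: {'choice': [0, 1]}}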

Defining objective function

The objective function receives one ‘individual’ of the genetic algorithm’s population, which is an ordered list of the hyperparameters defined in the space variable. Within the objective function, the user performs all the required calculations and returns the metric to be optimized (as a tuple). If multiple metrics are returned, all of them are optimized according to the fitness defined when initializing the GeneticAlgorithm class.

Here, we use a simple linear regression model to fit the data.

[3]:
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
import pandas as pd

def obj(individual, features=features):
    # Turn the 0/1 chromosome into a boolean column mask.
    df = pd.DataFrame(features)
    new_cols = list(map(bool, individual))
    features = df[df.columns[new_cols]].values
    # Fit a linear model on the first 400 molecules and
    # report the MAE on the remaining 100.
    lr = LinearRegression(n_jobs=1)
    lr.fit(features[:400], density[:400])
    pred = lr.predict(features[400:])
    return mean_absolute_error(density[400:], pred)
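As a quick sanity check (a sketch, not part of the original run), the objective can be evaluated by hand on an individual that keeps every descriptor:

all_on = tuple([1] * features.shape[1])
print(obj(all_on))  # test MAE when all 20 descriptors are used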

Optimize the feature space

[4]:
from chemml.optimization import GeneticAlgorithm
ga = GeneticAlgorithm(evaluate=obj, space=space, fitness=("min",),
                      crossover_type="Uniform",
                      pop_size=10,       # 10 individuals per generation
                      crossover_size=6,  # 6 offspring from crossover
                      mutation_size=4,   # 4 offspring from mutation
                      algorithm=3)
fitness_df, final_best_features = ga.search(n_generations=20)

ga.search returns:

  • a dataframe with the best individuals of each generation along with their fitness values and the time taken to evaluate the model

  • a dictionary containing the best individual (in this case the top features)
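For example (a small sketch using the column names shown in the output below), the overall best individual and its fitness can be pulled out of the dataframe:

best_row = fitness_df.loc[fitness_df["Fitness_values"].idxmin()]
print(best_row["Fitness_values"], best_row["Best_individual"])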

[5]:
fitness_df
[5]:
Best_individual Fitness_values Time (hours)
0 (1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, ... 14.231872 0.000518
1 (1, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, ... 13.511623 0.000494
2 (1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, ... 11.969715 0.000498
3 (1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, ... 11.969715 0.000511
4 (1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, ... 11.969715 0.000506
5 (1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, ... 11.969715 0.000495
6 (1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, ... 11.969715 0.000482
7 (1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, ... 11.626140 0.000487
8 (1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, ... 11.626140 0.000493
9 (1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, ... 11.536057 0.000490
10 (1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, ... 11.536057 0.000487
11 (1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, ... 11.405468 0.000486
12 (1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, ... 11.405468 0.000480
13 (1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, ... 11.405468 0.000484
14 (1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, ... 11.405468 0.000492
15 (1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, ... 11.405468 0.000489
16 (1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, ... 11.405468 0.000487
17 (1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, ... 11.405468 0.000494
18 (1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, ... 11.405468 0.000504
19 (1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, ... 11.404044 0.000476
[6]:
print(final_best_features)
{0: 1, 1: 1, 2: 1, 3: 0, 4: 1, 5: 1, 6: 1, 7: 1, 8: 1, 9: 1, 10: 1, 11: 0, 12: 0, 13: 1, 14: 1, 15: 1, 16: 1, 17: 0, 18: 0, 19: 1}
[7]:
# Printing out which columns were selected

df = pd.DataFrame(features, columns=col_names)
new_cols = [bool(v) for v in final_best_features.values()]
print(df[df.columns[new_cols]])
         MW    AMW      Sv      Sp      Si     Mv     Me     Mp     Mi     GD  \
0    285.54  7.932  22.324  25.801  39.995  0.620  0.980  0.717  1.111  0.140
1    240.24  9.240  18.569  18.455  29.242  0.714  1.032  0.710  1.125  0.131
2    313.42  8.471  24.643  25.938  41.697  0.666  1.009  0.701  1.127  0.108
3    218.32  9.492  15.153  16.466  25.544  0.659  1.024  0.716  1.111  0.179
4    319.45  9.396  23.303  24.824  38.202  0.685  1.015  0.730  1.124  0.114
..      ...    ...     ...     ...     ...    ...    ...    ...    ...    ...
495  296.36  7.799  24.676  25.500  42.903  0.649  1.010  0.671  1.129  0.108
496  328.50  8.645  25.621  27.887  42.324  0.674  0.996  0.734  1.114  0.108
497  373.52  8.120  30.982  33.006  51.318  0.674  0.995  0.718  1.116  0.088
498  323.50  8.513  24.158  26.142  43.404  0.636  1.007  0.688  1.142  0.114
499  348.42  8.103  30.445  31.796  47.112  0.708  0.993  0.739  1.096  0.088

     nTA   nBT   nBO   nBM    RBF
0    0.0  38.0  19.0   5.0  0.079
1    1.0  28.0  20.0  17.0  0.071
2    0.0  40.0  25.0  16.0  0.075
3    2.0  24.0  14.0   5.0  0.042
4    0.0  37.0  24.0  15.0  0.081
..   ...   ...   ...   ...    ...
495  0.0  41.0  25.0  16.0  0.073
496  1.0  41.0  25.0  16.0  0.049
497  0.0  50.0  31.0  22.0  0.060
498  0.0  41.0  24.0  10.0  0.073
499  0.0  47.0  31.0  28.0  0.064

[500 rows x 15 columns]
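To use the selected descriptors downstream, the model can be refit on the reduced matrix (a minimal sketch, reusing the 400/100 split from the objective function); its test MAE should match the final fitness value reported above:

selected = df[df.columns[new_cols]].values
model = LinearRegression()
model.fit(selected[:400], density[:400])
print(mean_absolute_error(density[400:], model.predict(selected[400:])))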