Feature Selection using chemml.optimization.GeneticAlgorithm

We use a sample dataset from the ChemML library, which contains SMILES codes and Dragon molecular descriptors for 500 small organic molecules together with their densities in \(kg/m^3\). For simplicity, we perform feature selection with the Genetic Algorithm on a subset of 20 molecular descriptors.

For more information on the Genetic Algorithm, please refer to our paper.

[1]:
from chemml.datasets import load_organic_density

# load the 500-molecule dataset; we only need the densities and descriptors
_, density, features = load_organic_density()

# keep only the first 20 molecular descriptors for this example
features = features.iloc[:, :20]
print(density.shape, features.shape)

# convert the pandas DataFrames to NumPy arrays
density, features = density.values, features.values
(500, 1) (500, 20)

Defining the hyperparameter space

Each feature is encoded as a binary bit of the chromosome:

  • 0 indicates the feature is discarded.

  • 1 indicates the feature is selected.

[2]:
space = tuple([{i: {'choice': [0,1]}} for i in range(features.shape[1])])
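To see how this encoding plays out: each entry of space constrains one gene to the choices 0 and 1, and a candidate individual is an ordered tuple of 20 such bits (as in the Best_individual column shown later). A minimal illustration, where example_individual is a hypothetical chromosome chosen here for demonstration:

# each gene i may take the value 0 or 1
print(space[0])   # {0: {'choice': [0, 1]}}

# an individual is an ordered sequence of such bits; decode a hypothetical
# individual into the indices of the features it keeps
example_individual = [1, 0, 1] + [0] * 17
kept = [i for i, bit in enumerate(example_individual) if bit]
print(kept)       # [0, 2]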

Defining the objective function

The objective function receives one ‘individual’ from the genetic algorithm’s population, which is an ordered list of the hyperparameters defined in the space variable. Within the objective function, the user performs all the required calculations and returns the metric (as a tuple) that is to be optimized. If multiple metrics are returned, all of them are optimized according to the fitness defined when initializing the GeneticAlgorithm class.

Here, we use a simple linear regression model to fit the data.

[3]:
from sklearn.metrics import mean_absolute_error
import pandas as pd
from sklearn.linear_model import LinearRegression
def obj(individual, features=features):
    # build a boolean mask from the individual's bits and keep the selected columns
    df = pd.DataFrame(features)
    new_cols = list(map(bool, individual))
    df = df[df.columns[new_cols]]
    features = df.values
    # fit a plain linear regression on the first 400 molecules
    model = LinearRegression(n_jobs=1)
    model.fit(features[:400], density[:400])
    # evaluate on the remaining 100 molecules
    pred = model.predict(features[400:])
    return mean_absolute_error(density[400:], pred)
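As a quick sanity check (not part of the original tutorial), the objective can be called directly; here with an all-ones individual that keeps all 20 descriptors. The exact MAE depends on the 400/100 split above:

# baseline: select every feature
baseline_mae = obj([1] * 20)
print(baseline_mae)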

Optimize the feature space

[4]:
from chemml.optimization import GeneticAlgorithm

ga = GeneticAlgorithm(evaluate=obj, space=space, fitness=("min",),
                      crossover_type="Uniform", pop_size=10,
                      crossover_size=6, mutation_size=4, algorithm=3)
fitness_df, final_best_features = ga.search(n_generations=2)

ga.search returns:

  • a dataframe with the best individual of each generation, its fitness value, and the time taken to evaluate the model

  • a dictionary containing the best individual (in this case the top features)

[5]:
fitness_df
[5]:
   Best_individual                                      Fitness_values  Time (hours)
0  (1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1, ...        14.037435      0.000541
1  (0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, ...        11.537817      0.000542
[6]:
print(final_best_features)
{0: 0, 1: 1, 2: 1, 3: 1, 4: 0, 5: 1, 6: 0, 7: 1, 8: 1, 9: 1, 10: 1, 11: 1, 12: 1, 13: 1, 14: 0, 15: 1, 16: 0, 17: 1, 18: 1, 19: 1}
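Since final_best_features maps each feature index to its selection bit, it can be converted into a boolean mask to build the reduced descriptor matrix. A usage sketch, not part of the original tutorial (note that features is a NumPy array at this point):

import numpy as np

# boolean mask in feature-index order
mask = np.array([bool(final_best_features[i]) for i in range(features.shape[1])])
reduced_features = features[:, mask]
print(reduced_features.shape)   # (500, number_of_selected_features)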