{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "## Feature Selection using `chemml.optimization.GeneticAlgorithm`\n", "\n", "\n", "We use a sample dataset from ChemML library which has the SMILES codes and Dragon molecular descriptors for 500 small organic molecules with their densities in $kg/m^3$. For simplicity, we perform feature selection using Genetic Algorithm on a subset of 20 molecular descriptors. \n", "\n", "For more information on Genetic Algorithm, please refer to our [paper](https://doi.org/10.26434/chemrxiv.9782387.v1) " ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "(500, 1) (500, 20)\n" ] } ], "source": [ "from chemml.datasets import load_organic_density\n", "_,density,features = load_organic_density()\n", "features = features.iloc[:,:20]\n", "print(density.shape, features.shape)\n", "density, features = density.values, features.values" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Defining hyperparameter space\n", "\n", "For this, each individual feature is encoded as a binary bit of the chromosome. \n", "\n", "0 indicates feature is discarded.\n", "\n", "1 indicates feature is selected." ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "space = tuple([{i: {'choice': [0,1]}} for i in range(features.shape[1])])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Defining objective function\n", "The objective function is defined as a function that receives one ‘individual’ of the genetic algorithm’s population that is an ordered list of the hyperparameters defined in the space variable. Within the objective function, the user does all the required calculations and returns the metric (as a tuple) that is supposed to be optimized. If multiple metrics are returned, all the metrics are optimized according to the fitness defined in the initialization of the Genetic Algorithm class.\n", "\n", "Here, we use a simple linear regression model to fit the data." 
] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "from sklearn.metrics import mean_absolute_error\n", "import pandas as pd\n", "from sklearn.linear_model import LinearRegression\n", "def obj(individual, features=features):\n", " df = pd.DataFrame(features)\n", " new_cols = list(map(bool, individual))\n", " df = df[df.columns[new_cols]]\n", " features = df.values\n", " ridge = LinearRegression(n_jobs=1)\n", " ridge.fit(features[:400], density[:400])\n", " pred = ridge.predict(features[400:])\n", " return mean_absolute_error(density[400:], pred)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Optimize the feature space" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [], "source": [ "from chemml.optimization import GeneticAlgorithm\n", "ga = GeneticAlgorithm(evaluate=obj, space=space, fitness=(\"min\", ), crossover_type=\"Uniform\",\n", " pop_size = 10, crossover_size=6, mutation_size=4, algorithm=3)\n", "fitness_df, final_best_features = ga.search(n_generations=2)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`ga.search` returns:\n", "\n", "- a dataframe with the best individuals of each generation along with their fitness values and the time taken to evaluate the model\n", "\n", "- a dictionary containing the best individual (in this case the top features) " ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "scrolled": true }, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
Best_individualFitness_valuesTime (hours)
0(1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1, ...14.0374350.000541
1(0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, ...11.5378170.000542
\n", "
" ], "text/plain": [ " Best_individual Fitness_values \\\n", "0 (1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1, ... 14.037435 \n", "1 (0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, ... 11.537817 \n", "\n", " Time (hours) \n", "0 0.000541 \n", "1 0.000542 " ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ], "source": [ "fitness_df" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "{0: 0, 1: 1, 2: 1, 3: 1, 4: 0, 5: 1, 6: 0, 7: 1, 8: 1, 9: 1, 10: 1, 11: 1, 12: 1, 13: 1, 14: 0, 15: 1, 16: 0, 17: 1, 18: 1, 19: 1}\n" ] } ], "source": [ "print(final_best_features)" ] } ], "metadata": { "kernelspec": { "display_name": "Python [conda env:v2_0.7]", "language": "python", "name": "conda-env-v2_0.7-py" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.8" }, "widgets": { "application/vnd.jupyter.widget-state+json": { "state": {}, "version_major": 2, "version_minor": 0 } } }, "nbformat": 4, "nbformat_minor": 2 }