Neural Fingerprints

We create atom, bond, and edge tensors from molecule SMILES strings using chemml.chem.tensorise_molecules and use them to build neural fingerprints with the chemml.models.NeuralGraphHidden and chemml.models.NeuralGraphOutput modules. These neural fingerprints then serve as features for a simple feed-forward neural network, built with tensorflow, that predicts the densities of small organic compounds.

Here we import a sample dataset from the ChemML library, which contains the SMILES strings of 500 small organic molecules along with their densities in \(kg/m^3\).

[1]:
import numpy as np
from chemml.datasets import load_organic_density
molecules, target, dragon_subset = load_organic_density()
target = np.asarray(target['density_Kg/m3'])
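If you want a quick look at what was loaded before moving on, the snippet below prints the basic shapes; it relies only on the 'smiles' column and the density values that the rest of this notebook already uses.

# Optional sanity check of the loaded data
print(molecules.shape)              # DataFrame with one SMILES string per molecule
print(molecules['smiles'].head())
print(target.shape, target[:5])     # densities in kg/m^3 as a NumPy array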

Building chemml.chem.Molecule objects from the molecule SMILES strings.

[2]:
from chemml.chem import Molecule
mol_objs_list = []
for smi in molecules['smiles']:
    mol = Molecule(smi, 'smiles')
    mol.hydrogens('add')                                       # add explicit hydrogens
    mol.to_xyz('MMFF', maxIters=10000, mmffVariant='MMFF94s')  # generate 3D coordinates with the MMFF94s force field
    mol_objs_list.append(mol)
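Each Molecule object now carries explicit hydrogens and MMFF94s-generated 3D coordinates. A quick check that all 500 molecules were converted (the exact printout of a Molecule object depends on your ChemML version):

# Confirm all molecules were converted; the repr below is version-dependent
print(len(mol_objs_list))
print(mol_objs_list[0])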

Converting the Molecule objects into atom, bond, and edge tensors with chemml.chem.tensorise_molecules; these molecule tensors are then used to create the neural graph fingerprints with chemml.models.

[3]:
from chemml.chem import tensorise_molecules
xatoms, xbonds, xedges = tensorise_molecules(molecules=mol_objs_list, max_degree=5,
                                             max_atoms=None, n_jobs=-1, batch_size=100, verbose=True)
Tensorising molecules in batches of 100 ...
500/500 [==================================================] - 1s 1ms/step
Merging batch tensors ...    [DONE]
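The three tensors share a common layout; the sketch below just names the axes whose sizes are printed in the next cells (max_atoms is the largest atom count across the dataset, max_degree is the neighbor limit passed above).

# Axis meanings:
#   xatoms: (n_molecules, max_atoms, n_atom_features)
#   xbonds: (n_molecules, max_atoms, max_degree, n_bond_features)
#   xedges: (n_molecules, max_atoms, max_degree) -- neighbor indices per atom, padded for
#           atoms with fewer than max_degree neighbors
n_molecules, n_max_atoms, n_atom_features = xatoms.shape
print(n_molecules, n_max_atoms, n_atom_features)   # 500, 57, 62 for this dataset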

Splitting and preprocessing the data

[4]:
from sklearn.model_selection import ShuffleSplit
from sklearn.preprocessing import StandardScaler
y_scale = StandardScaler()
rs = ShuffleSplit(n_splits=1, test_size=.20, random_state=42)

# With n_splits=1, this loop runs exactly once and yields a single 80/20 train/test split
for train, test in rs.split(mol_objs_list):
    xatoms_train = xatoms[train]
    xatoms_test = xatoms[test]
    xbonds_train = xbonds[train]
    xbonds_test = xbonds[test]
    xedges_train = xedges[train]
    xedges_test = xedges[test]
    target_train = target[train]
    target_test = target[test]
    # Standardize only the training targets; the test targets stay in kg/m^3 and the
    # model's predictions are mapped back with y_scale.inverse_transform before evaluation
    target_train = y_scale.fit_transform(target_train.reshape(-1,1))

[5]:
print('Training data:\n')
print('Atoms: ',xatoms_train.shape)
print('Bonds: ',xbonds_train.shape)
print('Edges: ',xedges_train.shape)
print('Target: ',target_train.shape)

print('\nTesting data:\n')
print('Atoms: ',xatoms_test.shape)
print('Bonds: ',xbonds_test.shape)
print('Edges: ',xedges_test.shape)
print('Target: ',target_test.shape)
Training data:

Atoms:  (400, 57, 62)
Bonds:  (400, 57, 5, 6)
Edges:  (400, 57, 5)
Target:  (400, 1)

Testing data:

Atoms:  (100, 57, 62)
Bonds:  (100, 57, 5, 6)
Edges:  (100, 57, 5)
Target:  (100,)

Building the Neural Fingerprints

The atom, bond, and edge tensors are used here to build neural fingerprints of length 200 (fp_length) with a convolution width of 8 (conv_width, the number of convolved features each NeuralGraphHidden layer produces per atom).

[6]:
from chemml.models import NeuralGraphHidden, NeuralGraphOutput
from tensorflow.keras.layers import Input, add
import tensorflow as tf
tf.random.set_seed(42)

conv_width = 8
fp_length = 200

num_molecules = xatoms_train.shape[0]
max_atoms = xatoms_train.shape[1]
max_degree = xbonds_train.shape[2]
num_atom_features = xatoms_train.shape[-1]
num_bond_features = xbonds_train.shape[-1]

# Creating input layers for atom, bond, and edge information
atoms0 = Input(name='atom_inputs', shape=(max_atoms, num_atom_features),batch_size=None)
bonds = Input(name='bond_inputs', shape=(max_atoms, max_degree, num_bond_features),batch_size=None)
edges = Input(name='edge_inputs', shape=(max_atoms, max_degree), dtype='int32',batch_size=None)

# Defining the convolved atom feature layers
atoms1 = NeuralGraphHidden(conv_width, activation='relu', use_bias=False)([atoms0, bonds, edges])
atoms2 = NeuralGraphHidden(conv_width, activation='relu', use_bias=False)([atoms1, bonds, edges])

# Defining the outputs of each (convolved) atom feature layer to fingerprint
fp_out0 = NeuralGraphOutput(fp_length, activation='softmax')([atoms0,bonds,edges])
fp_out1 = NeuralGraphOutput(fp_length, activation='softmax')([atoms1,bonds,edges])
fp_out2 = NeuralGraphOutput(fp_length, activation='softmax')([atoms2,bonds,edges])

# Sum outputs to obtain fingerprint
final_fp = add([fp_out0, fp_out1, fp_out2])
print('Neural Fingerprint Shape: ',final_fp.shape)
Neural Fingerprint Shape:  (None, 200)
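final_fp feeds the regression head built in the next cell, but if you only want the fingerprints themselves as a plain feature matrix (for example, to use with a different regressor), you can wrap the graph layers in a standalone Keras Model. This is a minimal sketch, not part of the original workflow; the fingerprints are only meaningful once the layers have been trained.

# Optional: expose the fingerprint as its own model to extract a feature matrix
from tensorflow.keras import Model

fp_model = Model(inputs=[atoms0, bonds, edges], outputs=final_fp)
fingerprints = fp_model.predict([xatoms_train, xbonds_train, xedges_train])
print(fingerprints.shape)   # (400, 200): one 200-dimensional fingerprint per training molecule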

Building and training the neural network

Here, we build and train a simple feed-forward neural network using tensorflow.keras, with the neural fingerprints as input features.

[7]:
from tensorflow.keras import Model
from tensorflow.keras.layers import Dense

# Build and compile model for regression.
dense_layer0 = Dense(128, activation='relu', name='dense_layer0',
                     kernel_regularizer=tf.keras.regularizers.l2(0.01))(final_fp)
dense_layer1 = Dense(64, activation='relu', name='dense_layer1',
                     kernel_regularizer=tf.keras.regularizers.l2(0.01))(dense_layer0)

main_prediction = Dense(1, activation='linear', name='main_prediction')(dense_layer1)
model = Model(inputs=[atoms0, bonds, edges], outputs=[main_prediction])
model.compile(optimizer='adam', loss='mae')

# Show summary
model.summary()

model.fit([xatoms_train, xbonds_train, xedges_train], target_train, epochs=50,
          steps_per_epoch=None, batch_size=None, verbose=False, validation_split=0.1)
Model: "model"
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to
==================================================================================================
atom_inputs (InputLayer)        [(None, 57, 62)]     0
__________________________________________________________________________________________________
bond_inputs (InputLayer)        [(None, 57, 5, 6)]   0
__________________________________________________________________________________________________
edge_inputs (InputLayer)        [(None, 57, 5)]      0
__________________________________________________________________________________________________
neural_graph_hidden (NeuralGrap (None, 57, 8)        2720        atom_inputs[0][0]
                                                                 bond_inputs[0][0]
                                                                 edge_inputs[0][0]
__________________________________________________________________________________________________
neural_graph_hidden_1 (NeuralGr (None, 57, 8)        560         neural_graph_hidden[0][0]
                                                                 bond_inputs[0][0]
                                                                 edge_inputs[0][0]
__________________________________________________________________________________________________
neural_graph_output (NeuralGrap (None, 200)          13800       atom_inputs[0][0]
                                                                 bond_inputs[0][0]
                                                                 edge_inputs[0][0]
__________________________________________________________________________________________________
neural_graph_output_1 (NeuralGr (None, 200)          3000        neural_graph_hidden[0][0]
                                                                 bond_inputs[0][0]
                                                                 edge_inputs[0][0]
__________________________________________________________________________________________________
neural_graph_output_2 (NeuralGr (None, 200)          3000        neural_graph_hidden_1[0][0]
                                                                 bond_inputs[0][0]
                                                                 edge_inputs[0][0]
__________________________________________________________________________________________________
add (Add)                       (None, 200)          0           neural_graph_output[0][0]
                                                                 neural_graph_output_1[0][0]
                                                                 neural_graph_output_2[0][0]
__________________________________________________________________________________________________
dense_layer0 (Dense)            (None, 128)          25728       add[0][0]
__________________________________________________________________________________________________
dense_layer1 (Dense)            (None, 64)           8256        dense_layer0[0][0]
__________________________________________________________________________________________________
main_prediction (Dense)         (None, 1)            65          dense_layer1[0][0]
==================================================================================================
Total params: 57,129
Trainable params: 57,129
Non-trainable params: 0
__________________________________________________________________________________________________
[7]:
<keras.callbacks.History at 0x7feda4198be0>
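The History object shown above is just the return value of model.fit. To inspect the loss curves, capture it in a variable when training; a small sketch (note that calling fit again on an already-trained model continues training rather than starting over):

# Capture the History returned by fit to inspect the loss curves
history = model.fit([xatoms_train, xbonds_train, xedges_train], target_train, epochs=50,
                    batch_size=None, verbose=False, validation_split=0.1)
print(history.history['loss'][-1], history.history['val_loss'][-1])   # final train/validation MAE (scaled units)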

Predicting the densities of the molecules in the test set and evaluating the model on them.

[8]:
from chemml.utils import regression_metrics

y_pred = model.predict([xatoms_test, xbonds_test, xedges_test])
y_pred = y_scale.inverse_transform(y_pred)   # map predictions back to kg/m^3
metrics_df = regression_metrics(target_test, list(y_pred.reshape(-1,)))
mae = metrics_df['MAE'].values[0]
r_2 = metrics_df['r_squared'].values[0]

print("Mean Absolute Error = {} kg/m^3".format(mae.round(3)))
print("R squared = {}".format(r_2.round(3)))
Mean Absolute Error = 16.518 kg/m^3
R squared = 0.935
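A parity plot is a useful visual complement to these metrics. The sketch below assumes matplotlib is available in the environment; it is not imported anywhere else in this notebook.

# Optional: parity plot of predicted vs. reference densities
import matplotlib.pyplot as plt

y_pred_flat = y_pred.reshape(-1,)
plt.scatter(target_test, y_pred_flat, s=12, alpha=0.7)
lims = [min(target_test.min(), y_pred_flat.min()), max(target_test.max(), y_pred_flat.max())]
plt.plot(lims, lims, 'k--', linewidth=1)   # y = x reference line
plt.xlabel('Reference density (kg/m^3)')
plt.ylabel('Predicted density (kg/m^3)')
plt.title('Parity plot for the test set')
plt.show()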