How to migrate from TensorFlow to PyTorch?

I looked at this tutorial:

https://colab.research.google.com/github/deepchem/deepchem/blob/master/examples/tutorials/The_Basic_Tools_of_the_Deep_Life_Sciences.ipynb

Then I change
!pip install --pre deepchem[tensorflow]
to
!pip install --pre deepchem
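If you are unsure which backend actually ended up in the environment after switching the pip command, a quick standard-library check (plain `importlib`, no DeepChem-specific APIs) can confirm whether `torch` and/or `tensorflow` are importable:

```python
import importlib.util

def backend_available(name):
    # True if the package can be found by the import machinery
    # in the current environment.
    return importlib.util.find_spec(name) is not None

for pkg in ("torch", "tensorflow"):
    status = "installed" if backend_available(pkg) else "missing"
    print(f"{pkg}: {status}")
```

With `pip install --pre deepchem` (no extras), you would expect `torch` to be present and `tensorflow` possibly missing.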

Then I change
model = dc.models.GraphConvModel(n_tasks=1, mode='regression', dropout=0.2, batch_normalize=False)
to
num_features = train_dataset.X[0].get_atom_features().shape[1]
model = dc.models.torch_models.GraphConvModel(n_tasks=len(tasks), mode='regression', dropout=0.2, number_input_features=[num_features, 64], batch_normalize=False)
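For context, `get_atom_features()` returns an `(n_atoms, n_features)` matrix for each molecule, so `shape[1]` is the per-atom feature length. A minimal NumPy stand-in sketches that shape check (`FakeConvMol` is a hypothetical class for illustration, not part of DeepChem; 75 is, to my knowledge, the usual GraphConv atom-feature length):

```python
import numpy as np

class FakeConvMol:
    # Hypothetical stand-in for DeepChem's ConvMol: stores an
    # (n_atoms, n_features) matrix of per-atom features.
    def __init__(self, atom_features):
        self._atom_features = np.asarray(atom_features)

    def get_atom_features(self):
        return self._atom_features

# A molecule with 5 atoms, each described by 75 features.
mol = FakeConvMol(np.zeros((5, 75)))
num_features = mol.get_atom_features().shape[1]
print(num_features)  # 75
```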

Then I run model.fit and get a very bad result.

Where am I wrong?

Would you mind at least sharing what “bad result” you got?

It works correctly now. Apparently, I did something wrong a month ago.


import deepchem as dc

# Load your dataset (replace with your actual data loading).
# Note: Tox21 is usually treated as a classification benchmark;
# 'regression' is kept below to mirror the original question.
tasks, datasets, transformers = dc.molnet.load_tox21(featurizer='GraphConv', split='index')
train_dataset, valid_dataset, test_dataset = datasets

# Get the number of atom features
num_features = train_dataset.X[0].get_atom_features().shape[1]
print(f"Number of atom features: {num_features}")

# Initialize the GraphConvModel (PyTorch version)
model = dc.models.torch_models.GraphConvModel(
    n_tasks=len(tasks),
    mode='regression',  # or 'classification'
    dropout=0.2,
    number_input_features=num_features,  # pass the integer directly
    batch_normalize=False
)

# Train the model
model.fit(train_dataset, nb_epoch=10)  # adjust nb_epoch as needed

# Evaluate the model
metric = dc.metrics.Metric(dc.metrics.pearson_r2_score, task_averaging_mode='macro')
train_scores = model.evaluate(train_dataset, [metric], transformers)
valid_scores = model.evaluate(valid_dataset, [metric], transformers)

print(f"Train scores: {train_scores}")
print(f"Validation scores: {valid_scores}")

Try it.
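As a side note, the Pearson R² metric reported above is, as I understand it, just the squared Pearson correlation between labels and predictions. A DeepChem-independent NumPy sketch of the same quantity:

```python
import numpy as np

def pearson_r2(y_true, y_pred):
    # Square of the Pearson correlation coefficient between
    # the two arrays; 1.0 means a perfect linear relationship.
    r = np.corrcoef(y_true, y_pred)[0, 1]
    return r ** 2

y_true = np.array([0.0, 1.0, 2.0, 3.0])
y_pred = np.array([0.1, 0.9, 2.2, 2.8])  # close to y_true
print(pearson_r2(y_true, y_pred))  # close to 1.0
```

A score near 0 on the validation set would be the kind of “very bad result” worth reporting back with.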