Attendees: Bharath, Daiki, Seyone, Peter, Karl
Summary: Bharath’s been continuing work on factoring in changes from the refactoring PR. He merged a PR with a set of example changes and opened a new WIP PR that brings in fixes for hyperparameter optimization. Bharath also put up an issue laying out what’s needed for the 2.4.0 release. We’ve had a growing number of build problems, so Bharath suggested that we make the 2.4.0 cut once the docking PR and the WIP hyperparameter PR are merged in. He also suggested that we move to a monthly minor release cycle. Since the pace of development has picked up, this will help reduce the pressure on new builds.
Peter noted that we’d need to move to a more automated build system in order to be able to make monthly minor releases. There’s some machinery in the feedstock repo already which might help. Bharath agreed and said he’d start working on build automation once the 2.4.0 PRs were in. Karl noted that there’s some old documentation for releases already that might be useful to bring in.
Bharath also brought up a proposal he made in a recent issue about DeepChem refactoring. Our original plan when we started refactoring was to split DeepChem up into multiple repositories: the MoleculeNet repository would hold all the non-TensorFlow parts of DeepChem, and the original DeepChem repository would hold the TensorFlow parts. Bharath tried making these changes in a pair of PRs (1, 2), but it got very gnarly and hard to maintain. Bharath proposed that we instead keep the core modules in the DeepChem repository and migrate the TensorFlow parts into a new tensorchem repo. He proposed using import guards to turn TensorFlow into a soft dependency as a step towards this and asked for feedback on this design change.
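For readers unfamiliar with the pattern, an import guard along the lines discussed might look like the following minimal sketch. The flag and helper names here are illustrative, not DeepChem's actual API:

```python
# Sketch of an import guard that makes TensorFlow a soft dependency.
# HAS_TENSORFLOW and require_tensorflow are hypothetical names.
try:
    import tensorflow as tf  # noqa: F401
    HAS_TENSORFLOW = True
except ImportError:
    HAS_TENSORFLOW = False


def require_tensorflow():
    """Raise a helpful error if a TF-backed feature is used without TF."""
    if not HAS_TENSORFLOW:
        raise ImportError(
            "This feature requires TensorFlow. "
            "Install it with `pip install tensorflow`."
        )
```

The core package then imports cleanly whether or not TensorFlow is installed, and TF-backed models call the guard up front to fail with an actionable message.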
Peter suggested that if the TensorFlow import guards work well, it might make sense to develop DeepChem as a monorepo. We could use import guards around PyTorch and Jax imports so that DeepChem development is focused on a single monorepo with TensorFlow/PyTorch/Jax sub-packages. Bharath noted that a number of open source projects like Ray do this already. This has the advantage of focusing development and visibility on one core repo, which could help attract more contributors. Karl noted this would help users who develop with IDEs, since it would be easier to look up code in a monorepo. He also noted we’d need a clean automated release process that could generate different wheels for different packages, plus tests to make sure dependencies were cleanly separated. Bharath said he’d take a look at this when he’s working on overhauling the releases.
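One common way to ship framework-specific pieces from a single monorepo is pip extras. The sketch below is purely illustrative of the packaging pattern, not DeepChem's actual metadata:

```python
# Hypothetical extras mapping: each optional backend becomes a pip extra,
# so a single package can serve TensorFlow, PyTorch, and Jax users.
EXTRAS = {
    "tensorflow": ["tensorflow"],
    "torch": ["torch"],
    "jax": ["jax", "jaxlib"],
}

# In setup.py this dict would be passed as `extras_require=EXTRAS`,
# letting users run e.g. `pip install deepchem[jax]` to pull in only
# the backend they need.
```

This keeps the core wheel free of heavy framework dependencies while still publishing everything from one repo.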
Daiki’s been working on a set of improvements to the DeepChem docker setup and updated the tutorials to fix some installation issues. He’s also got a draft PR up that adds a first Jax graph convolutional implementation.
Seyone’s been busy with some other work this week, but had some time to work on the graph attention implementation. It’s not done yet, but it’s coming along. Seyone also mentioned that he could take a look at the docking PR once Bharath merged it in.
Karl spent some time catching up on the development that’s happened in the last couple years and mentioned that he could help provide feedback on the release process as that starts to come together.
Karl asked whether it would be possible to benchmark the Jaxchem implementation to see how it does on speed and performance compared with the original TensorFlow implementation in DeepChem. Daiki mentioned he’d try to run some experiments and see how things compared. Bharath mentioned that Jax was more functional than object oriented, while the rest of DeepChem is very object oriented. Daiki mentioned he’d taken a look at haiku and was considering whether it would make sense to use it. Bharath thought it might be useful to bring the Jaxchem API in line with the rest of the DeepChem API.
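To make the functional-vs-object-oriented tension concrete, one option for bringing the Jaxchem API in line with the rest of DeepChem is a thin OO facade over a functional init/apply pair (the shape exposed by Jax libraries such as haiku). This is a minimal sketch with made-up names, using plain-Python stand-ins rather than real Jax code:

```python
# Hypothetical wrapper: holds the parameters produced by a functional
# init function and exposes a DeepChem-style predict() method around
# the corresponding apply function.
class FunctionalModelWrapper:
    def __init__(self, init_fn, apply_fn, rng):
        self.apply_fn = apply_fn
        self.params = init_fn(rng)  # parameters are explicit state here

    def predict(self, inputs):
        return self.apply_fn(self.params, inputs)


# Toy stand-ins for the functional pieces (a real version would use
# e.g. haiku's transformed init/apply):
init = lambda rng: {"w": 2.0}
apply = lambda params, x: params["w"] * x

model = FunctionalModelWrapper(init, apply, rng=0)
# model.predict(3.0) -> 6.0
```

The functional core stays pure and jit-friendly; only the wrapper carries state, which is roughly how an object-oriented DeepChem API could sit on top of Jax.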
As a quick reminder to anyone reading along, the DeepChem developer calls are open to the public! If you’re interested in attending, please send an email to X.Y@gmail.com, where X=bharath, Y=ramsundar.