Attendees: Bharath, Daiki, Seyone, Dilip, Sean, Peter, Karl, Venkat, Pat
Summary: Bharath spent most of the last week working on documentation. He put up a PR that overhauled the DeepChem documentation. He also merged in small PRs improving the metrics documentation, the MoleculeNet documentation, and the layers documentation. The new documentation is now live at https://deepchem.readthedocs.io/en/latest/. Bharath’s plan for the coming week is to make the final fixes on the docking PR.
Sean has continued working on the DeepChem-to-Julia port. He’s sped up the weave implementation in Julia; it is now 80-100x faster than it was previously, but still about 10x slower than the Python implementation. Dilip has been working with Sean on the Julia port. Bharath asked whether the Julia code was using the GPU yet, and Venkat answered that the underlying framework, Flux, still has only rudimentary GPU support, but that should be changing over the coming months.
Daiki put up a PR to fix some of the DeepChem-on-Colab installation issues. With this PR in place, it’s now possible to run OpenMM on Colab, which we were previously having some trouble with. Daiki has also put up a WIP PR overhauling the Docker script, which was outdated.
Seyone’s been working on a graph attention implementation in PyTorch. He hopes to get that implemented and tested, then move on to the Keras implementation. He mentioned that he also hoped to take a look at the docking PR when he got a bit of time. Bharath mentioned that with Daiki’s latest OpenMM Colab install fixes, it should be possible to run FEP calculations on Colab for Seyone’s other projects.
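For readers unfamiliar with the mechanism being implemented, here is a minimal NumPy sketch of a single graph-attention head in the style of Veličković et al.'s GAT: project node features, score each edge with a shared attention vector, softmax over each node's neighborhood, and aggregate. This is an illustrative toy, not Seyone's actual PyTorch code; all names, shapes, and the example graph are made up for the sketch.

```python
import numpy as np

def gat_attention_layer(h, adj, W, a):
    """One graph-attention head (illustrative sketch).
    h:   (N, F) node features
    adj: (N, N) 0/1 adjacency matrix with self-loops
    W:   (F, F_out) shared linear projection
    a:   (2*F_out,) shared attention scoring vector
    Returns (N, F_out) updated node features."""
    z = h @ W                                   # project features: (N, F_out)
    n = z.shape[0]
    # Score every node pair: e[i, j] = LeakyReLU(a . [z_i || z_j])
    e = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            s = a @ np.concatenate([z[i], z[j]])
            e[i, j] = s if s > 0 else 0.2 * s   # LeakyReLU, negative slope 0.2
    # Mask non-edges, then softmax each row over the neighborhood
    e = np.where(adj > 0, e, -np.inf)
    e = e - e.max(axis=1, keepdims=True)        # numerical stability
    alpha = np.exp(e)
    alpha = alpha / alpha.sum(axis=1, keepdims=True)
    return alpha @ z                            # attention-weighted aggregation

# Toy example: a 4-node path graph with self-loops
rng = np.random.default_rng(0)
h = rng.normal(size=(4, 3))                     # 4 nodes, 3 features each
adj = np.eye(4) + np.diag(np.ones(3), 1) + np.diag(np.ones(3), -1)
W = rng.normal(size=(3, 2))
a = rng.normal(size=(4,))
out = gat_attention_layer(h, adj, W, a)
print(out.shape)                                # (4, 2)
```

A real PyTorch version would vectorize the pairwise scoring and use learnable `W` and `a` parameters, but the masking-then-softmax structure carries over directly.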
Peter’s been working on other projects this week and hasn’t had time to work on DeepChem.
Karl joined in to learn more about the JaxChem work. Bharath and Karl will be mentoring Daiki on the GSoC work for the summer. Bharath said he’d set up some offline time to sync up on JaxChem progress.
Pat was able to join this week since the new time is more convenient. He’s excited to learn more about the progress of the upcoming structure-based discovery tools and volunteered to help test them once they’re ready. Bharath said that the new docs and the new structure-based improvements should be up soon.
Venkat joined the call for the first time and gave a brief intro about himself. He’s a professor at CMU and is advising Dilip and Sean. Their team is working to build a Julia implementation of DeepChem and also does work applying techniques like crystal graph convolutions to materials design.
Since there was a bit of time left on the call, the conversation circled back to the Julia implementation. For now, some of the matrix operations are still slower than NumPy’s (which have likely been heavily optimized). Bharath mentioned that it might be useful to look at DGL’s or PyTorch Geometric’s graph conv implementations to see how they’re done. Peter mentioned that he’d looked at DGL’s code base a bit; their system has layers of abstraction, so it’s not clear which part is contributing to their speed improvements over DeepChem’s graph convs. Our latest optimizations should have closed the gap a bit, but it seems that DGL’s graph convs are still faster for now.
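The gap the call discussed is easy to see in miniature: a hand-written matrix multiply (the kind of baseline a new port can resemble before tuning) is dramatically slower than NumPy's BLAS-backed routine even though both compute the same result. The sketch below is purely illustrative and says nothing about the actual Julia code.

```python
import time
import numpy as np

def naive_matmul(a, b):
    """Triple-loop matrix multiply: an untuned baseline
    for comparison against NumPy's optimized routine."""
    n, m, k = len(a), len(b), len(b[0])
    out = [[0.0] * k for _ in range(n)]
    for i in range(n):
        for j in range(k):
            s = 0.0
            for t in range(m):
                s += a[i][t] * b[t][j]
            out[i][j] = s
    return out

size = 120
a = np.random.rand(size, size)
b = np.random.rand(size, size)

t0 = time.perf_counter()
naive = naive_matmul(a.tolist(), b.tolist())
t_naive = time.perf_counter() - t0

t0 = time.perf_counter()
fast = a @ b
t_fast = time.perf_counter() - t0

# Same numerical result; only the speed differs.
assert np.allclose(naive, fast)
print(f"naive loop: {t_naive:.4f}s, numpy: {t_fast:.6f}s")
```

Closing this kind of gap usually means dispatching to optimized kernels (BLAS, or Julia's own tuned linear algebra) rather than micro-optimizing the loops, which is why looking at how DGL structures its message passing could be informative.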
As a quick reminder to anyone reading along, the DeepChem developer calls are open to the public! If you’re interested in attending, please send an email to X.Y@gmail.com, where X=bharath, Y=ramsundar.