TensorFlow or PyTorch? Both!
GraphGallery is a gallery of state-of-the-art graph neural networks (GNNs) for TensorFlow 2.x and PyTorch. GraphGallery 0.4.0 is a complete rewrite of previous versions, and some things have changed.
- Build from source (latest version):

```bash
git clone https://github.com/EdisonLeeeee/GraphGallery.git
cd GraphGallery
python setup.py install
```

- Or install with pip (stable version):

```bash
pip install -U graphgallery
```
In detail, the following methods are currently implemented:
- ChebyNet from Michaël Defferrard et al., Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering, NIPS'16. [TF]
- GCN from Thomas N. Kipf et al., Semi-Supervised Classification with Graph Convolutional Networks, ICLR'17. [TF], [Torch]
- GraphSAGE from William L. Hamilton et al., Inductive Representation Learning on Large Graphs, NIPS'17. [TF]
- FastGCN from Jie Chen et al., FastGCN: Fast Learning with Graph Convolutional Networks via Importance Sampling, ICLR'18. [TF]
- LGCN from Hongyang Gao et al., Large-Scale Learnable Graph Convolutional Networks, KDD'18. [TF]
- GAT from Petar Veličković et al., Graph Attention Networks, ICLR'18. [TF], [Torch]
- SGC from Felix Wu et al., Simplifying Graph Convolutional Networks, ICML'19. [TF], [Torch]
- GWNN from Bingbing Xu et al., Graph Wavelet Neural Network, ICLR'19. [TF]
- GMNN from Meng Qu et al., Graph Markov Neural Networks, ICML'19. [TF]
- ClusterGCN from Wei-Lin Chiang et al., Cluster-GCN: An Efficient Algorithm for Training Deep and Large Graph Convolutional Networks, KDD'19. [TF], [Torch]
- DAGNN from Meng Liu et al., Towards Deeper Graph Neural Networks, KDD'20. [TF]
- RobustGCN from Dingyuan Zhu et al., Robust Graph Convolutional Networks Against Adversarial Attacks, KDD'19. [TF]
- SBVAT/OBVAT from Zhijie Deng et al., Batch Virtual Adversarial Training for Graph Convolutional Networks, ICML'19. [TF], [TF]
- DeepWalk from Bryan Perozzi et al., DeepWalk: Online Learning of Social Representations, KDD'14. [TF]
- Node2vec from Aditya Grover et al., node2vec: Scalable Feature Learning for Networks, KDD'16. [TF]
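Several of these models (GCN, SGC, and relatives) share the same core idea: smoothing node features with the symmetrically normalized adjacency matrix. The sketch below is an illustration of that shared propagation rule using plain NumPy on a toy graph, not GraphGallery's actual implementation.

```python
import numpy as np

def normalized_adjacency(A):
    """Return D^{-1/2} (A + I) D^{-1/2} for a dense 0/1 adjacency matrix."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d = A_hat.sum(axis=1)                   # degrees (with self-loops)
    d_inv_sqrt = 1.0 / np.sqrt(d)
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def sgc_features(A, X, k=2):
    """Pre-propagated features S^k X, as used by SGC before one linear layer."""
    S = normalized_adjacency(A)
    for _ in range(k):
        X = S @ X
    return X

# toy graph: a triangle (nodes 0-2) plus one pendant node (node 3)
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.eye(4)                               # one-hot node features
print(sgc_features(A, X, k=2).shape)        # (4, 4)
```

GCN applies this same smoothing once per layer with a nonlinearity in between; SGC collapses the k propagation steps into a single pre-processing pass.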
```python
from graphgallery.data import Planetoid

# set `verbose=False` to avoid the printed tables
data = Planetoid('cora', verbose=False)
graph = data.graph
# training, validation, and testing node indices: 1D Numpy arrays
idx_train, idx_val, idx_test = data.split()
```

```python
>>> graph
Graph(adj_matrix(2708, 2708), attr_matrix(2708, 1433), labels(2708,))
```
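The `Planetoid` loader returns the fixed public split (140 training, 500 validation, and 1000 test nodes for Cora). As a minimal sketch of what `data.split()` produces, here is a random split of the same shapes in plain NumPy; the real split uses predefined indices, not random ones.

```python
import numpy as np

def random_split(num_nodes, n_train=140, n_val=500, n_test=1000, seed=123):
    """Return three disjoint 1D index arrays over `num_nodes` nodes."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(num_nodes)       # shuffle all node indices once
    idx_train = perm[:n_train]
    idx_val = perm[n_train:n_train + n_val]
    idx_test = perm[n_train + n_val:n_train + n_val + n_test]
    return idx_train, idx_val, idx_test

idx_train, idx_val, idx_test = random_split(2708)
print(idx_train.shape, idx_val.shape, idx_test.shape)  # (140,) (500,) (1000,)
```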
Currently the supported datasets are:

```python
>>> data.supported_datasets
('citeseer', 'cora', 'pubmed')
```
```python
from graphgallery.nn.models import GCN

model = GCN(graph, attr_transform="normalize_attr", device="CPU", seed=123)
# build your GCN model with default hyper-parameters
model.build()
# train your model; `idx_train` and `idx_val` are 1D Numpy arrays
his = model.train(idx_train, idx_val, verbose=1, epochs=100)
# test your model
loss, accuracy = model.test(idx_test)
print(f'Test loss {loss:.5}, Test accuracy {accuracy:.2%}')
```
On the Cora dataset:

```
<Loss = 1.0161 Acc = 0.9500 Val_Loss = 1.4101 Val_Acc = 0.7740 >: 100%|██████████| 100/100 [00:01<00:00, 118.02it/s]
Test loss 1.4123, Test accuracy 81.20%
```
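The reported test metrics are the usual cross-entropy loss and accuracy computed over the test node indices. As a sketch of what such an evaluation looks like (toy logits and labels, not GraphGallery's internals):

```python
import numpy as np

def evaluate(logits, labels, idx):
    """Mean cross-entropy loss and accuracy over the given node indices."""
    z = logits[idx]
    z = z - z.max(axis=1, keepdims=True)                    # stable softmax
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    loss = -np.log(probs[np.arange(len(idx)), labels[idx]]).mean()
    acc = (probs.argmax(axis=1) == labels[idx]).mean()      # fraction correct
    return loss, acc

# toy 4-node, 2-class example
logits = np.array([[2.0, 0.1], [0.2, 1.5], [0.3, 0.4], [1.0, 0.0]])
labels = np.array([0, 1, 1, 0])
idx_test = np.array([0, 1, 2])
loss, accuracy = evaluate(logits, labels, idx_test)
print(f'Test loss {loss:.5}, Test accuracy {accuracy:.2%}')
```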
- Build your model: you can use the following statements to build your model with different hyper-parameters

```python
# one hidden layer with 32 hidden units and ReLU activation
>>> model.build(hiddens=32, activations='relu')
# two hidden layers with 32 and 64 hidden units, both with ReLU activation
>>> model.build(hiddens=[32, 64], activations='relu')
# two hidden layers with 32 and 64 hidden units and ReLU and ELU activations, respectively
>>> model.build(hiddens=[32, 64], activations=['relu', 'elu'])
```
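The semantics of these three calls can be sketched as follows: a single activation string is broadcast to every hidden layer, while lists are paired element-wise. This is an illustration of the API's apparent behavior, not GraphGallery's actual builder code.

```python
def layer_spec(hiddens, activations):
    """Pair hidden-unit counts with activation names, broadcasting scalars."""
    if isinstance(hiddens, int):
        hiddens = [hiddens]
    if isinstance(activations, str):
        activations = [activations] * len(hiddens)   # broadcast one activation
    if len(activations) != len(hiddens):
        raise ValueError("hiddens and activations must have the same length")
    return list(zip(hiddens, activations))

print(layer_spec(32, 'relu'))                 # [(32, 'relu')]
print(layer_spec([32, 64], 'relu'))           # [(32, 'relu'), (64, 'relu')]
print(layer_spec([32, 64], ['relu', 'elu']))  # [(32, 'relu'), (64, 'elu')]
```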
- Train your model

```python
# train with validation
>>> his = model.train(idx_train, idx_val, verbose=1, epochs=100)
# train without validation
>>> his = model.train(idx_train, verbose=1, epochs=100)
```
Here `his` is a TensorFlow `History`-like instance.
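Concretely, such an object exposes a `history` dict mapping metric names to per-epoch lists, which is the structure the plotting snippet later in this README relies on. A minimal sketch of that shape (an illustration, not Keras's or GraphGallery's class):

```python
class History:
    """Minimal History-like container: `history` maps names to per-epoch lists."""
    def __init__(self):
        self.history = {}

    def record(self, **metrics):
        for name, value in metrics.items():
            self.history.setdefault(name, []).append(value)

his = History()
for epoch in range(3):                      # toy training loop
    his.record(loss=1.0 / (epoch + 1), acc=0.5 + 0.1 * epoch)

print(sorted(his.history))                  # ['acc', 'loss']
print(len(his.history['acc']))              # 3
```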
- Test your model

```python
>>> loss, accuracy = model.test(idx_test)
>>> print(f'Test loss {loss:.5}, Test accuracy {accuracy:.2%}')
Test loss 1.4124, Test accuracy 81.20%
```
NOTE: you must install the SciencePlots package for a better preview.
```python
import matplotlib.pyplot as plt

with plt.style.context(['science', 'no-latex']):
    fig, axes = plt.subplots(1, 2, figsize=(15, 5))
    axes[0].plot(his.history['acc'], label='Train accuracy')
    axes[0].plot(his.history['val_acc'], label='Val accuracy')
    axes[0].set_xlabel('Epochs')
    axes[0].legend()
    axes[1].plot(his.history['loss'], label='Training loss')
    axes[1].plot(his.history['val_loss'], label='Validation loss')
    axes[1].set_xlabel('Epochs')
    axes[1].legend()
    plt.autoscale(tight=True)
    plt.show()
```
```python
>>> import graphgallery
>>> graphgallery.backend()
TensorFlow 2.1.0 Backend
>>> graphgallery.set_backend("pytorch")
PyTorch 1.6.0+cu101 Backend
```
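This backend-switching pattern is commonly implemented as a module-level registry that model classes consult at build time. The following is a plain-Python sketch of that pattern under that assumption; GraphGallery's real implementation may differ.

```python
# hypothetical backend registry; the names and version strings below are
# illustrative, mirroring the README output above
_BACKENDS = {
    "tensorflow": "TensorFlow 2.1.0 Backend",
    "pytorch": "PyTorch 1.6.0+cu101 Backend",
}
_active = "tensorflow"                      # default backend

def backend():
    """Return a description of the currently active backend."""
    return _BACKENDS[_active]

def set_backend(name):
    """Switch the active backend; later model builds would consult it."""
    global _active
    if name not in _BACKENDS:
        raise ValueError(f"unknown backend: {name!r}")
    _active = name
    return backend()

print(backend())                # TensorFlow 2.1.0 Backend
print(set_backend("pytorch"))   # PyTorch 1.6.0+cu101 Backend
```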
GCN using the PyTorch backend:

```python
# the following code is the same as with the TensorFlow backend
>>> from graphgallery.nn.models import GCN
>>> model = GCN(graph, attr_transform="normalize_attr", device="GPU", seed=123)
>>> model.build()
>>> his = model.train(idx_train, idx_val, verbose=1, epochs=100)
<Loss = 0.6813 Acc = 0.9214 Val_Loss = 1.0506 Val_Acc = 0.7820 >: 100%|██████████| 100/100 [00:00<00:00, 173.95it/s]
>>> loss, accuracy = model.test(idx_test)
>>> print(f'Test loss {loss:.5}, Test accuracy {accuracy:.2%}')
Test loss 1.0131, Test accuracy 82.20%
```
TODO
Please refer to the examples directory.
- Add Docstrings and Documentation (Building)
- Add PyTorch models support
- Support for graph classification and link prediction tasks
- Support for heterogeneous graphs
This project is inspired by PyTorch Geometric, TensorFlow Geometric, and StellarGraph, as well as the authors' original implementations; thanks for their excellent work!