#
# @title          :probabilistic/prob_cifar/train_resnet_avb_pf.py
# @author         :ch
# @contact        :[email protected]
# @created        :01/30/2020
# @version        :1.0
# @python_version :3.6.9
"""
Train implicit posterior via AVB for prior-focused CIFAR-10/100 with Resnet-32
------------------------------------------------------------------------------

The script :mod:`probabilistic.prob_cifar.train_resnet_avb_pf` is used to run a
probabilistic CL experiment on CIFAR using a Resnet-32
(:class:`mnets.resnet.ResNet`) and Adversarial-Variational-Bayes (AVB) as the
method to learn a single posterior for all tasks sequentially. At the moment, it
simply takes care of providing the correct command-line arguments and default
values to the end user. Afterwards, it calls
:mod:`probabilistic.prob_cifar.train_avb`.
"""
# Do not delete the following import for all executable scripts!
import __init__ # pylint: disable=unused-import

from probabilistic.prob_mnist import train_args
from probabilistic.prob_cifar import train_avb

if __name__ == '__main__':
    config = train_args.parse_cmd_arguments(mode='cifar_resnet_avb_pf')

    train_avb.run(config, experiment='cifar_resnet_avb_pf')
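For intuition, the core trick of AVB is to replace the intractable KL term of the ELBO with a density ratio estimated by a discriminator. The following self-contained numpy sketch (ours, not the repo's implementation; all names are illustrative) shows this on 1-D Gaussians: a linear logistic-regression discriminator trained to separate "posterior" from "prior" samples recovers the optimal ``T*(w) = log q(w) - log p(w)``, whose average over posterior samples estimates the KL.

```python
import numpy as np

# Toy setup: "prior" p = N(0, 1), "posterior" q = N(1, 1). The optimal
# discriminator is T*(w) = log q(w) - log p(w) = w - 0.5, and AVB uses
# E_q[T(w)] as a sample-based estimate of the KL term in the ELBO.
rng = np.random.default_rng(0)
n = 20000
w_p = rng.normal(0.0, 1.0, n)                   # samples from the prior
w_q = rng.normal(1.0, 1.0, n)                   # samples from the posterior
X = np.concatenate([w_q, w_p])
y = np.concatenate([np.ones(n), np.zeros(n)])   # 1 = posterior, 0 = prior

# Linear discriminator T(w) = a*w + b, trained with the logistic loss.
a, b, lr = 0.0, 0.0, 0.1
for _ in range(3000):
    p = 1.0 / (1.0 + np.exp(-(a * X + b)))      # sigmoid(T(w))
    g = p - y                                   # per-sample loss gradient
    a -= lr * np.mean(g * X)
    b -= lr * np.mean(g)

# KL(q || p) is then estimated as the mean of T over posterior samples;
# for this pair of Gaussians the analytic value is 0.5.
kl_estimate = np.mean(a * w_q + b)
```

In the real scripts the discriminator is a neural network over target-network weight samples produced by a hypernetwork, but the density-ratio logic is the same.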

# limitations under the License.
#
# @title          :probabilistic/prob_mnist/train_split_avb_pf.py
# @author         :ch
# @contact        :[email protected]
# @created        :01/30/2020
# @version        :1.0
# @python_version :3.6.9
"""
Train implicit posterior via AVB for prior-focused SplitMNIST
----------------------------------------------------------------

The script :mod:`probabilistic.prob_mnist.train_split_avb_pf` is used to run
experiments on SplitMNIST. At the moment, it simply takes care of providing the
correct command-line arguments and default values to the end user and then
calls :mod:`probabilistic.prob_cifar.train_avb`, which trains a single
posterior for all tasks sequentially using the prior-focused CL approach
(i.e., the posterior of the previous task becomes the prior of the current
task).
"""
# Do not delete the following import for all executable scripts!
import __init__  # pylint: disable=unused-import

from probabilistic.prob_mnist import train_args
from probabilistic.prob_cifar import train_avb

if __name__ == '__main__':
    config = train_args.parse_cmd_arguments(mode='split_mnist_avb_pf')

    train_avb.run(config, experiment='split_mnist_avb_pf')
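The prior-focused recursion, where the posterior of task t-1 becomes the prior of task t, is easiest to see in the explicit diagonal-Gaussian case. A minimal sketch (ours; the scripts here use implicit posteriors, for which this KL is instead estimated adversarially):

```python
import numpy as np

def kl_diag_gauss(mu_q, logvar_q, mu_p, logvar_p):
    """KL( N(mu_q, diag(var_q)) || N(mu_p, diag(var_p)) ), summed over dims."""
    var_q, var_p = np.exp(logvar_q), np.exp(logvar_p)
    return 0.5 * np.sum(
        logvar_p - logvar_q + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0
    )

# Prior-focused CL: the posterior learned on task t-1 acts as the prior
# when fitting the posterior of task t, so the variational objective's
# KL regularizer pulls the new posterior toward the old one.
mu_prev, lv_prev = np.zeros(4), np.zeros(4)      # posterior after task t-1
mu_curr, lv_curr = np.full(4, 0.1), np.zeros(4)  # candidate posterior, task t
reg = kl_diag_gauss(mu_curr, lv_curr, mu_prev, lv_prev)
```

With implicit distributions no closed form like this exists, which is why AVB's discriminator (or SSGE in the sibling scripts) is needed.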
"""
Train implicit posteriors via SSGE for SplitMNIST
-------------------------------------------------

In this script, we train a target network via variational inference, where the
variational family is NOT restricted to a set of Gaussian distributions with
diagonal covariance matrix (as in
:mod:`probabilistic.prob_mnist.train_bbb`).
For training, we use an implicit method; the corresponding gradient estimator
is described in

    Shi, Jiaxin, Shengyang Sun, and Jun Zhu. "A spectral approach to gradient
    estimation for implicit distributions." ICML, 2018.
    https://arxiv.org/abs/1806.02925

Specifically, we use a hypernetwork to output the weights for the target
network of each task in a continual learning setup, where tasks are presented
sequentially and forgetting of previous tasks is prevented by the
regularizer proposed in

    https://arxiv.org/abs/1906.00695
"""
# Do not delete the following import for all executable scripts!
import __init__  # pylint: disable=unused-import

from probabilistic.prob_mnist import train_args
from probabilistic.prob_cifar import train_avb

if __name__ == '__main__':
    config = train_args.parse_cmd_arguments(mode='split_mnist_ssge')

    train_avb.run(config, experiment='split_mnist_ssge')
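For intuition about what the Shi et al. estimator does, here is a minimal numpy sketch of SSGE (our illustrative implementation, not the repo's): approximate the top eigenfunctions of an RBF kernel from samples via the Nystroem method, then expand the score ``grad log q`` in that basis, with coefficients obtained by integration by parts.

```python
import numpy as np

def rbf(x, y, sigma):
    """RBF kernel Gram matrix between rows of x (M, d) and y (N, d)."""
    sq = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2.0 * sigma ** 2))

def ssge_score(samples, query, n_eigen=6):
    """Estimate grad log q at `query` from `samples` drawn from q (SSGE)."""
    m = samples.shape[0]
    # Median heuristic for the kernel bandwidth.
    d2 = ((samples[:, None] - samples[None]) ** 2).sum(-1)
    sigma = np.sqrt(np.median(d2[d2 > 0]))
    K = rbf(samples, samples, sigma)
    eigval, eigvec = np.linalg.eigh(K)                # ascending eigenvalues
    lam = eigval[::-1][:n_eigen]                      # top-J eigenvalues
    u = eigvec[:, ::-1][:, :n_eigen]                  # matching eigenvectors
    # Nystroem approximation of the eigenfunctions at the query points.
    psi = np.sqrt(m) * rbf(query, samples, sigma) @ u / lam
    # Gradients of the eigenfunctions, averaged over the samples; by
    # integration by parts these give the expansion coefficients beta_j.
    diff = samples[:, None, :] - samples[None, :, :]  # (M, M, d)
    gradK = -K[..., None] * diff / sigma ** 2         # grad_x k(x, x_n)
    grad_psi = np.sqrt(m) * np.einsum('mnd,nj->mjd', gradK, u) \
        / lam[None, :, None]
    beta = -grad_psi.mean(axis=0)                     # (J, d) coefficients
    return psi @ beta                                 # (N, d) score estimate

# Sanity check: for q = N(0, 1) the true score is -x.
rng = np.random.default_rng(0)
x = rng.standard_normal((500, 1))
score = ssge_score(x, x)
```

In the scripts below, the "samples" are weight vectors produced by the hypernetwork, so the estimator provides the gradient of the (implicit) posterior's entropy term.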
"""
Train implicit posteriors via SSGE for CIFAR-10/100 with Resnet-32
------------------------------------------------------------------

In this script, we train a target network via variational inference, where the
variational family is NOT restricted to a set of Gaussian distributions with
diagonal covariance matrix (as in
:mod:`probabilistic.prob_mnist.train_bbb`).
For training, we use an implicit method; the corresponding gradient estimator
is described in

    Shi, Jiaxin, Shengyang Sun, and Jun Zhu. "A spectral approach to gradient
    estimation for implicit distributions." ICML, 2018.
    https://arxiv.org/abs/1806.02925

Specifically, we use a hypernetwork to output the weights for the target
network of each task in a continual learning setup, where tasks are presented
sequentially and forgetting of previous tasks is prevented by the
regularizer proposed in

    https://arxiv.org/abs/1906.00695
"""
# Do not delete the following import for all executable scripts!
import __init__  # pylint: disable=unused-import

from probabilistic.prob_mnist import train_args
from probabilistic.prob_cifar import train_avb

if __name__ == '__main__':
    config = train_args.parse_cmd_arguments(mode='cifar_resnet_ssge')

    train_avb.run(config, experiment='cifar_resnet_ssge')
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# @title          :probabilistic/prob_mnist/train_perm_avb_pf.py
# @author         :ch
# @contact        :[email protected]
# @created        :01/30/2020
# @version        :1.0
# @python_version :3.6.9
"""
Train implicit posterior via AVB for prior-focused PermutedMNIST
----------------------------------------------------------------

The script :mod:`probabilistic.prob_mnist.train_perm_avb_pf` is used to run
experiments on PermutedMNIST. Its role is analogous to that of the script
:mod:`probabilistic.prob_mnist.train_split_avb_pf`.
"""
# Do not delete the following import for all executable scripts!
import __init__  # pylint: disable=unused-import

from probabilistic.prob_mnist import train_args
from probabilistic.prob_cifar import train_avb

if __name__ == '__main__':
    config = train_args.parse_cmd_arguments(mode='perm_mnist_avb_pf')

    train_avb.run(config, experiment='perm_mnist_avb_pf')
# limitations under the License.
#
# @title          :probabilistic/prob_cifar/train_zenke_avb_pf.py
# @author         :ch
# @contact        :[email protected]
# @created        :01/30/2020
# @version        :1.0
# @python_version :3.6.9
"""
Train implicit posterior via AVB for prior-focused CIFAR-10/100 with ZenkeNet
-----------------------------------------------------------------------------

The script :mod:`probabilistic.prob_cifar.train_zenke_avb_pf` is used to run a
probabilistic CL experiment on CIFAR using a ZenkeNet
(:class:`mnets.zenkenet.ZenkeNet`) and Adversarial-Variational-Bayes (AVB) as
the method to learn a single posterior for all tasks sequentially. At the
moment, it simply takes care of providing the correct command-line arguments
and default values to the end user. Afterwards, it calls
:mod:`probabilistic.prob_cifar.train_avb`.
"""
# Do not delete the following import for all executable scripts!
import __init__ # pylint: disable=unused-import

from probabilistic.prob_mnist import train_args
from probabilistic.prob_cifar import train_avb

if __name__ == '__main__':
    config = train_args.parse_cmd_arguments(mode='cifar_zenke_avb_pf')

    train_avb.run(config, experiment='cifar_zenke_avb_pf')
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# @title          :probabilistic/prob_gmm/train_gmm_ssge_pf.py
# @author         :ch
# @contact        :[email protected]
# @created        :03/10/2020
# @version        :1.0
# @python_version :3.6.10
"""
Train implicit posterior via SSGE for prior-focused CL with GMM tasks
---------------------------------------------------------------------

The script :mod:`probabilistic.prob_gmm.train_gmm_ssge_pf` is used to run a
probabilistic CL experiment on a toy classification problem using synthetic
data (:class:`data.special.GMMData`). Spectral Stein Gradient Estimator (SSGE)
is used to learn a single posterior for all tasks sequentially.
"""
# Do not delete the following import for all executable scripts!
import __init__ # pylint: disable=unused-import

from probabilistic.prob_cifar import train_avb
from probabilistic.prob_mnist import train_args

if __name__ == '__main__':
    config = train_args.parse_cmd_arguments(mode='gmm_ssge_pf')

    train_avb.run(config, experiment='gmm_ssge_pf')
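To make the toy setting concrete, a task of the kind this script trains on can be generated as below. This is our illustrative sketch only; the actual :class:`data.special.GMMData` interface may differ, and all names here are ours.

```python
import numpy as np

def sample_gmm_task(n_per_class=100, means=((-2.0, 0.0), (2.0, 0.0)),
                    std=0.5, seed=0):
    """Toy 2-D classification task: one isotropic Gaussian blob per class.

    Returns inputs of shape (2 * n_per_class, 2) and integer labels.
    """
    rng = np.random.default_rng(seed)
    xs, ys = [], []
    for label, mu in enumerate(means):
        # Draw n_per_class points around the class mean.
        xs.append(rng.normal(mu, std, size=(n_per_class, 2)))
        ys.append(np.full(n_per_class, label))
    return np.concatenate(xs), np.concatenate(ys)

inputs, labels = sample_gmm_task()
```

A sequence of such tasks (e.g., with shifted means) then forms the continual-learning problem on which the single, prior-focused posterior is trained.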

"""
Train implicit posteriors via SSGE for PermutedMNIST
----------------------------------------------------

In this script, we train a target network via variational inference, where the
variational family is NOT restricted to a set of Gaussian distributions with
diagonal covariance matrix (as in
:mod:`probabilistic.prob_mnist.train_bbb`).
For training, we use an implicit method; the corresponding gradient estimator
is described in

    Shi, Jiaxin, Shengyang Sun, and Jun Zhu. "A spectral approach to gradient
    estimation for implicit distributions." ICML, 2018.
    https://arxiv.org/abs/1806.02925

Specifically, we use a hypernetwork to output the weights for the target
network of each task in a continual learning setup, where tasks are presented
sequentially and forgetting of previous tasks is prevented by the
regularizer proposed in

    https://arxiv.org/abs/1906.00695
"""
# Do not delete the following import for all executable scripts!
import __init__  # pylint: disable=unused-import

from probabilistic.prob_mnist import train_args
from probabilistic.prob_cifar import train_avb

if __name__ == '__main__':
    config = train_args.parse_cmd_arguments(mode='perm_mnist_ssge')

    train_avb.run(config, experiment='perm_mnist_ssge')
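The forgetting-prevention regularizer cited above (arXiv:1906.00695) penalizes changes of the hypernetwork's *output* for previously learned task embeddings, rather than changes of its parameters directly. A minimal sketch with a toy linear hypernetwork (ours; all names are illustrative, and the repo's hypernetworks are of course nonlinear):

```python
import numpy as np

def hnet(theta, emb):
    """Toy linear hypernetwork: maps a task embedding to target-net weights."""
    return theta @ emb

def hnet_output_reg(theta, embeddings, stored_outputs):
    """Sum over old tasks t of ||h(theta, e_t) - h(theta*, e_t)||^2, where
    the stored outputs were produced by the parameters theta* checkpointed
    before training on the current task."""
    return sum(np.sum((hnet(theta, e) - w) ** 2)
               for e, w in zip(embeddings, stored_outputs))

rng = np.random.default_rng(0)
theta_star = rng.standard_normal((8, 3))         # params after old tasks
embs = [rng.standard_normal(3) for _ in range(2)]
targets = [hnet(theta_star, e) for e in embs]    # checkpointed outputs
```

As long as the current parameters reproduce the stored outputs, the penalty is zero, so the hypernetwork is free to change anywhere except at the old task embeddings.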
# @title          :probabilistic/prob_gmm/train_gmm_avb.py
# @author         :ch
# @contact        :[email protected]
# @created        :03/10/2020
# @version        :1.0
# @python_version :3.6.10
"""
Train implicit per-task posteriors for GMM tasks with AVB
---------------------------------------------------------

The script :mod:`probabilistic.prob_gmm.train_gmm_avb` is used to run a
probabilistic CL experiment on a toy classification problem using synthetic
data (:class:`data.special.GMMData`). Adversarial-Variational-Bayes (AVB) is
used to learn task-specific weight posteriors. At the moment, the script simply
takes care of providing the correct command-line arguments and default values
to the end user. Afterwards, it calls
:mod:`probabilistic.prob_cifar.train_avb`.

See :ref:`prob-gmm-avb-readme-reference-label` for usage instructions.
"""
# Do not delete the following import for all executable scripts!
import __init__  # pylint: disable=unused-import

from probabilistic.prob_cifar import train_avb
from probabilistic.prob_mnist import train_args

if __name__ == '__main__':
    config = train_args.parse_cmd_arguments(mode='gmm_avb')

    train_avb.run(config, experiment='gmm_avb')