Code example #1
import numpy as np

# Project-local modules this example depends on: D (gradient operator),
# make_weight_matrix, T_solver and G_solver.
import D
import G_solver
import T_solver
import make_weight_matrix


def exact_solver(img, alpha, mu0, rho):
    # Augmented-Lagrangian style alternating solver: it repeatedly updates
    # T, the auxiliary variable G (standing in for the gradient of T) and
    # the multiplier Z, scaling the penalty weight mu by rho each pass.
    size = np.shape(img)
    iterations = 50

    Z = np.zeros((2 * size[0], size[1]))  # Lagrange multiplier
    G = np.zeros_like(Z)                  # auxiliary variable for the gradient of T
    k = 0
    mu = mu0                              # penalty weight

    W = make_weight_matrix.make_weight_matrix(img, 5)

    while k < iterations:
        U = Z / mu
        A = alpha * W / mu
        T = T_solver.T_solver(img, mu, G, U)  # T sub-problem
        delT = D.D(T)                         # gradient of the current T
        G = G_solver.G_solver(A, (delT + U))  # G sub-problem
        B = delT - G
        Z = mu * (B + U)                      # multiplier update: Z + mu * (delT - G)
        mu = mu * rho                         # tighten the penalty
        k = k + 1

    return T
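
A hypothetical usage sketch (the test image and the values of alpha, mu0 and rho below are illustrative assumptions, not values taken from the original project, and the project-local modules above must be importable):

img = np.random.rand(64, 64)  # stand-in for a grayscale image in [0, 1]
T = exact_solver(img, alpha=0.15, mu0=1.0, rho=1.5)
print(np.shape(T))            # the map returned by the solver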
Code example #2

import numpy as np

# Project-local modules this example depends on: D (gradient operator),
# gaussian_filter and conv (1-D convolution).
import D
import conv
import gaussian_filter


def make_weight_matrix(img, ker_size):
    # Builds the spatial weight matrix W: the image gradients are smoothed
    # with a Gaussian kernel, and each weight is the reciprocal of the
    # smoothed gradient magnitude (a small epsilon avoids division by zero).
    size = np.shape(img)
    p = size[0] * size[1]

    # Stack the x- and y-gradients of the image into one column vector.
    D_img = D.D(img)
    D_img_vec = np.reshape(D_img, (2 * p, 1), order='F')

    dtx = D_img_vec[0:p]
    dty = D_img_vec[p:2 * p]

    # Smooth both gradient components with a 1-D Gaussian kernel.
    w_gauss = gaussian_filter.gaussian_filter((ker_size, 1), 2)
    w_gauss = np.reshape(w_gauss, w_gauss.size, order='F')
    dtx = np.reshape(dtx, dtx.size, order='F')
    dty = np.reshape(dty, dty.size, order='F')
    convl_x = conv.conv(dtx, w_gauss)
    convl_y = conv.conv(dty, w_gauss)

    # Weights are inversely proportional to the smoothed gradient magnitude.
    w_x = 1.0 / (np.absolute(convl_x) + 0.0001)
    w_y = 1.0 / (np.absolute(convl_y) + 0.0001)

    W_vec = np.concatenate((w_x, w_y))
    W = np.reshape(W_vec, (2 * size[0], size[1]), order='F')

    return W
Code example #3
File: 4_GAN.py Project: samph4/TrainingBook
This example will be more in-depth than the first few, but a lot of the principles that we have already applied also apply here. As always, we'll go through it step by step and I'll do my best to explain each part so that it makes sense and is as easy to follow as I can make it. In this final example, we will be looking at Generative Adversarial Networks - affectionately known as GANs. The concept of GANs was first introduced by Ian Goodfellow and his team in 2014 (https://arxiv.org/abs/1406.2661), where they "proposed a new framework for estimating generative models via an adversarial process". I'll get into this in much more detail, but essentially we are going to train two neural networks (the adversaries) that compete against one another in order to improve. One will be referred to as the Discriminator and the other will be known as the Generator. We combine both of these networks to form a combined model known as the GAN for training. Once training has been completed, we want to be able to use the *trained* Generator network independently to generate new things!

![Image](./Figures/gan2.png)

The image above looks rather unassuming: it is simply a row of portraits of four different people. The interesting thing, however, is that none of these people actually exist. They are not real. Each of these images was generated by a Generative Adversarial Network known as StyleGAN. StyleGAN is a sophisticated GAN created and trained by NVIDIA; it represents the state of the art in data-driven unconditional generative image modelling and is an impressive testament to the possibilities of generative networks. Here is another video that demonstrates the capabilities of these methods (which is only 2 minutes long so I recommend you watch it because it's v cool) - https://www.youtube.com/watch?v=p5U4NgVGAwg. With that being said, let's take a closer look at how these things actually work.

## Generative Adversarial Networks

![Image](./Figures/gan1.png)

The Generative Adversarial Network is a framework for estimating generative models via an adversarial process in which two neural networks compete against each other during training. It is a useful machine learning technique that learns to generate fake samples indistinguishable from real ones via a competitive game. Whilst this may sound a little confusing, the GAN is nothing more than a combined model where two neural networks are joined together; these are known as the Discriminator $D$ and the Generator $G$. The Discriminator $D$ is a classification network that is set up to maximise the probability of assigning the correct label to real (label 1) or fake (label 0) samples. Meanwhile, the Generator $G$ is trying to fool $D$ and generate new samples that the Discriminator believes came from the training set. Mathematically speaking, this corresponds to the following two-player minimax game with value function $V(G,D)$:

![Image](./Figures/minmax.png)
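
Written out, the value function pictured above (from the Goodfellow et al. paper linked earlier) is:

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]$$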

Where $x$ is an input to $D$ drawn from the training set, $z$ is a vector of latent values input to $G$, $E_x$ is the expected value over all real data instances, $D(x)$ is the Discriminator's estimate of the probability that real data instance $x$ is real, $E_z$ is the expected value over all random inputs to the Generator, and $D(G(z))$ is the Discriminator's estimate of the probability that a fake instance is real. The diagram above should help this bit make sense. To reiterate, the primary goal of $G$ is to fool $D$ and generate new samples that $D$ believes came from the training set (real). The primary goal of $D$ is to correctly classify real/fake samples by assigning a label of 0 to generated samples, indicating a fake, and a label of 1 to true samples, indicating that they are real and came from the training set. The training procedure for $G$ is to maximise the probability of $D$ making a mistake, i.e. an incorrect classification. In the space of arbitrary functions $G$ and $D$, a unique solution exists, with $G$ able to reproduce data with the same distribution as the training set and the output from $D$ ≈ 0.5 for all samples, which simply indicates that the Discriminator can no longer differentiate between the training data and the data generated by $G$. Or in other words, the Generator $G$ has got so good at generating 'fake' data that $D$ can no longer tell the difference. The image below is taken from Google's documentation about GANs, which is worth a read as it (obviously) does a good job of explaining some of these concepts (https://developers.google.com/machine-learning/gan).

![Image](./Figures/forge.png)
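
To make the two objectives concrete, here is a minimal sketch of the corresponding loss terms, assuming TensorFlow/Keras (an assumption on my part; the notebook's framework isn't shown in this excerpt). Note that the generator loss below is the widely used non-saturating variant rather than the literal $\log(1 - D(G(z)))$ term from the minimax formulation.

```python
# Minimal sketch of the two GAN loss terms, assuming TensorFlow/Keras.
import tensorflow as tf

bce = tf.keras.losses.BinaryCrossentropy()

def discriminator_loss(d_real, d_fake):
    # D should assign label 1 to real samples and label 0 to generated ones.
    return bce(tf.ones_like(d_real), d_real) + bce(tf.zeros_like(d_fake), d_fake)

def generator_loss(d_fake):
    # G succeeds when D labels its samples as real (label 1); this is the
    # common non-saturating form of the generator objective.
    return bce(tf.ones_like(d_fake), d_fake)
```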



## Training Set

First of all, we need to decide what we want our generative network to generate. Of course, NVIDIA's sophisticated StyleGAN is capable of generating human faces, but GANs can generate new data regardless of the form it comes in: new audio signals, new images, new time-series data and so on. A GAN generates new data that is representative of the data it was trained on (the training set), so by and large a key factor in the success of the GAN model lies in the quality of the training set. In this example, we will create a simple training set from the function $y=\sin(x)$ and use the trained generator to produce similar values!
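
As a concrete illustration, here is a minimal sketch of how such a training set might be built, assuming NumPy; the sample count, segment length and random phases are illustrative choices of mine, not values from the original notebook.

```python
# Sketch: build a training set of short segments of the sine curve.
import numpy as np

def make_training_set(n_samples=1024, points_per_sample=16):
    # Each sample starts at a random phase in [0, 2*pi) and spans one period.
    starts = np.random.uniform(0.0, 2.0 * np.pi, size=(n_samples, 1))
    xs = starts + np.linspace(0.0, 2.0 * np.pi, points_per_sample)
    return np.sin(xs)  # shape: (n_samples, points_per_sample)

X_train = make_training_set()
print(X_train.shape)  # (1024, 16)
```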

```{note}
Throughout this example I may use terms such as 'real' and 'fake' when referring to data. 'Real' refers to data samples that come from the training set, and 'fake' refers to any data produced by the Generator.
```

### Import Libraries
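
The import cell itself is not included in this excerpt. A plausible set of imports for a Keras-based example like this one might look as follows; the choice of TensorFlow/Keras, NumPy and Matplotlib is an assumption on my part, not the notebook's actual cell.

```python
# Assumed imports for a Keras-based GAN workflow.
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.keras import layers
```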
Code example #4
import D  # user-defined module; D.D appears to be a simple date class

# Create a date object for 25 February 2021 (day, month, year) and print it.
oggi = D.D(25, 2, 2021)
oggi.out()

# Commented-out calls, presumably checks on invalid dates:
#oggi.mod(30,2,2021)    # 30 February does not exist

#oggi = D.D(1,25,2021)  # month 25 is out of range