Example #1
 def test_25(self):
     U = np.random.randn(5, 10)
     B, S, C = linalg.pca(U, centre=True)
     assert np.linalg.norm(B.dot(B.T) - np.eye(U.shape[0])) < 1e-10
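The property this test checks is that the PCA basis returned by `linalg.pca` is orthonormal. A minimal self-contained sketch of the same check, where `pca_basis` is a hypothetical stand-in built on numpy's SVD rather than SPORCO's implementation:

```python
import numpy as np

def pca_basis(U, centre=False):
    """Hypothetical stand-in for sporco.linalg.pca: return an
    orthonormal basis via the SVD of the (optionally centred) data."""
    if centre:
        # Centre each row by subtracting its mean across samples.
        U = U - U.mean(axis=1, keepdims=True)
    # Left singular vectors form an orthonormal basis for the row space.
    B, s, _ = np.linalg.svd(U, full_matrices=False)
    return B, s

U = np.random.randn(5, 10)
B, s = pca_basis(U, centre=True)
# B is a square orthogonal matrix here, so B B^T is the identity.
print(np.linalg.norm(B.dot(B.T) - np.eye(U.shape[0])) < 1e-10)  # True
```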
Example #2
img = img[16:-17, 16:-17, 0:200:2]
img /= img.max()

np.random.seed(12345)
imgn = signal.spnoise(img, 0.33)
"""
We use a product dictionary :cite:`garcia-2018-convolutional2` constructed from a single-channel convolutional dictionary for the spatial axes of the image, and a truncated PCA basis for the spectral axis of the image. The impulse denoising problem is solved by appending an additional filter to the learned dictionary ``D0``, which is one of those distributed with SPORCO. This additional component consists of an impulse filter that will represent the low-frequency image components when used together with a gradient penalty on the coefficient maps, as discussed below. The PCA basis is computed from the noise-free ground-truth image since the primary purpose of this script is as a code usage example: in a real application, the PCA basis would be estimated from a relevant noise-free image, or could be estimated from the noisy image via Robust PCA.
"""

D0 = util.convdicts()['G:8x8x32']
Di = np.zeros(D0.shape[0:2] + (1, ), dtype=np.float32)
Di[0, 0] = 1.0
D = np.concatenate((Di, D0), axis=2)

S = img.reshape((-1, img.shape[-1])).T
pcaB, pcaS, pcaC = pca(S, centre=False)
B = pcaB[:, 0:20]
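Without SPORCO, an equivalent uncentred truncated basis can be obtained directly from numpy's SVD. A sketch with a synthetic stand-in for the reshaped hyperspectral data (the 100 bands and 500 pixels are illustrative values, and the 20-component truncation matches the snippet above):

```python
import numpy as np

# Synthetic stand-in for the reshaped image: each column is one
# pixel's spectrum (100 bands, 500 pixels).
S = np.random.randn(100, 500).astype(np.float32)

# Left singular vectors of S are the uncentred PCA basis vectors,
# ordered by decreasing singular value.
B_full, s, _ = np.linalg.svd(S, full_matrices=False)

# Keep the leading 20 components as the truncated spectral basis.
B = B_full[:, 0:20]
print(B.shape)  # (100, 20)
```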
"""
The problem is solved using class :class:`.admm.pdcsc.ConvProdDictL1L1GrdJoint`, which implements a convolutional sparse coding problem with a product dictionary :cite:`garcia-2018-convolutional2`, an :math:`\ell_1` data fidelity term, an :math:`\ell_1` regularisation term, and an additional gradient regularization term :cite:`wohlberg-2016-convolutional2`, as defined above. The regularization parameters for the :math:`\ell_1` and gradient terms are ``lmbda`` and ``mu`` respectively. Setting correct weighting arrays for these regularization terms is critical to obtaining good performance. For the :math:`\ell_1` norm, the weights on the filters that are intended to represent low frequency components are set to zero (we only want them penalised by the gradient term), and the weights of the remaining filters are set to unity. For the gradient penalty, all weights are set to zero except for those corresponding to the filters intended to represent low frequency components, which are set to unity.
"""

lmbda = 4.2e0
mu = 9.5e0
"""
Set up weights for the :math:`\ell_1` norm to disable regularization of the coefficient map corresponding to the impulse filter.
"""

wl1 = np.ones((1, ) * 4 + (D.shape[2], ), dtype=np.float32)
wl1[..., 0] = 0.0
"""
Set up weights for the :math:`\ell_2` norm of the gradient to disable regularization of all coefficient maps except for the one corresponding to the impulse filter.
"""
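The snippet is truncated before the gradient weights are constructed. A sketch of the complementary weighting that the text describes, mirroring the ``wl1`` pattern above (the flat shape of the gradient weight array is an assumption about what :class:`.admm.pdcsc.ConvProdDictL1L1GrdJoint` expects):

```python
import numpy as np

# Hypothetical filter count: one impulse filter plus the 32 filters
# of the 'G:8x8x32' learned dictionary.
nfilt = 33

# wl1 penalises every coefficient map except the impulse filter's.
wl1 = np.ones((1,) * 4 + (nfilt,), dtype=np.float32)
wl1[..., 0] = 0.0

# wgrd is the complement: only the impulse filter's coefficient map
# receives the gradient penalty.
wgrd = np.zeros((nfilt,), dtype=np.float32)
wgrd[0] = 1.0

# Each filter is penalised by exactly one of the two terms.
print(np.all(wl1.ravel() + wgrd == 1.0))  # True
```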
Example #3
img = img[16:-17, 16:-17, 0:200:2]
img /= img.max()

np.random.seed(12345)
imgn = util.spnoise(img, 0.33)
"""
We use a product dictionary :cite:`garcia-2018-convolutional2` constructed from a single-channel convolutional dictionary for the spatial axes of the image, and a truncated PCA basis for the spectral axis of the image. The impulse denoising problem is solved by appending an additional filter to the learned dictionary ``D0``, which is one of those distributed with SPORCO. This additional component consists of an impulse filter that will represent the low-frequency image components when used together with a gradient penalty on the coefficient maps, as discussed below. The PCA basis is computed from the noise-free ground-truth image since the primary purpose of this script is as a code usage example: in a real application, the PCA basis would be estimated from a relevant noise-free image, or could be estimated from the noisy image via Robust PCA.
"""

D0 = util.convdicts()['G:8x8x32']
Di = np.zeros(D0.shape[0:2] + (1, ), dtype=np.float32)
Di[0, 0] = 1.0
D = np.concatenate((Di, D0), axis=2)

S = img.reshape((-1, img.shape[-1])).T
pcaB, pcaS, pcaC = sl.pca(S, centre=False)
B = pcaB[:, 0:20]
"""
The problem is solved using class :class:`.admm.pdcsc.ConvProdDictL1L1GrdJoint`, which implements a convolutional sparse coding problem with a product dictionary :cite:`garcia-2018-convolutional2`, an :math:`\ell_1` data fidelity term, an :math:`\ell_1` regularisation term, and an additional gradient regularization term :cite:`wohlberg-2016-convolutional2`, as defined above. The regularization parameters for the :math:`\ell_1` and gradient terms are ``lmbda`` and ``mu`` respectively. Setting correct weighting arrays for these regularization terms is critical to obtaining good performance. For the :math:`\ell_1` norm, the weights on the filters that are intended to represent low frequency components are set to zero (we only want them penalised by the gradient term), and the weights of the remaining filters are set to unity. For the gradient penalty, all weights are set to zero except for those corresponding to the filters intended to represent low frequency components, which are set to unity.
"""

lmbda = 4.2e0
mu = 9.5e0
"""
Set up weights for the :math:`\ell_1` norm to disable regularization of the coefficient map corresponding to the impulse filter.
"""

wl1 = np.ones((1, ) * 4 + (D.shape[2], ), dtype=np.float32)
wl1[..., 0] = 0.0
"""
Set up weights for the :math:`\ell_2` norm of the gradient to disable regularization of all coefficient maps except for the one corresponding to the impulse filter.
"""