MNIST identification with CNN
Accuracy: 99.13%
Figure out which approach suits me best
Not Started
Implement the K-means algorithm in TensorFlow
MNIST identification with a CNN, then clustering the results with K-means
Still can't work out how to use K-means to build the reduce function
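While the TensorFlow reduce step is still unclear, the clustering itself is straightforward; a minimal plain-NumPy K-means sketch (random blobs standing in for CNN feature vectors) looks like this:

```python
import numpy as np

def kmeans(x, k, n_iters=20, seed=0):
    """Plain K-means: assign each point to its nearest centroid, then recompute centroids."""
    rng = np.random.default_rng(seed)
    centroids = x[rng.choice(len(x), size=k, replace=False)].astype(float)
    for _ in range(n_iters):
        # distance of every point to every centroid -> (n, k)
        dists = np.linalg.norm(x[:, None, :] - centroids[None, :, :], axis=-1)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = x[labels == j].mean(axis=0)
    return labels, centroids

# two well-separated blobs standing in for CNN feature vectors
rng = np.random.default_rng(1)
blob_a = rng.normal(loc=0.0, scale=0.1, size=(50, 2))
blob_b = rng.normal(loc=5.0, scale=0.1, size=(50, 2))
features = np.vstack([blob_a, blob_b])
labels, _ = kmeans(features, k=2)
```

In the actual project the input would be the CNN's penultimate-layer activations rather than random blobs.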
Rewrite the CIFAR-10 example with the newest TensorFlow API (1.2.0-rc1) and open a pull request to Google; hopefully they will merge my code
Reading steps:
- tf.Reader
- tf.image
- tf.train.string_input_producer
- tf.get_variable / tf.Variable / tf.variable_scope
- weight_decay (conceptual)
- weight_decay (coding)
- tf.truncated_normal
- tf.nn.l2_loss
- tf.multiply(step 1, step 2)
- tf.add_to_collection
- use collection
- tf.nn.bias_add
- loss_function
- train
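The weight-decay steps in the list above (truncated-normal init, l2_loss, multiply by the decay factor, add to a losses collection, then sum the collection) can be sketched in plain NumPy; the helper names below are illustrative, not the tutorial's actual code:

```python
import numpy as np

losses = []  # stands in for tf.add_to_collection('losses', ...)

def l2_loss(w):
    return np.sum(w ** 2) / 2.0  # same definition as tf.nn.l2_loss

def variable_with_weight_decay(shape, stddev, wd, rng):
    """Create a weight tensor and register its decay penalty in the collection."""
    w = rng.normal(0.0, stddev, size=shape)  # stand-in for tf.truncated_normal init
    if wd is not None:
        losses.append(wd * l2_loss(w))       # tf.multiply + tf.add_to_collection
    return w

rng = np.random.default_rng(0)
w1 = variable_with_weight_decay((4, 3), stddev=0.1, wd=0.004, rng=rng)
data_loss = 1.25                             # placeholder cross-entropy value
total_loss = data_loss + sum(losses)         # the "use collection" step
```

The point of the collection pattern is that each layer registers its own penalty, and the training op only ever needs the summed total.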
Accuracy: 0.86
Add a dropout layer
Accuracy: 0.91
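The dropout layer that produced the accuracy jump works like this sketch (inverted dropout, as `tf.nn.dropout` implements it; the scaling keeps the expected activation unchanged between train and test):

```python
import numpy as np

def dropout(x, keep_prob, rng):
    """Zero each unit with probability 1 - keep_prob; scale survivors by 1/keep_prob."""
    mask = rng.random(x.shape) < keep_prob
    return x * mask / keep_prob

rng = np.random.default_rng(0)
acts = np.ones(1000)                      # toy activations
out = dropout(acts, keep_prob=0.5, rng=rng)
```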
Fixed a prediction problem
Accuracy: 0.93
Figuring out how to optimize
Use batch normalization to improve prediction accuracy
Trying to understand
- gan
- dc-gan
- ac-gan (basic usage)
- ac-gan with cnn
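The core objective shared by all the GAN variants above can be sketched with made-up discriminator outputs standing in for a real network (the non-saturating generator loss from the original GAN paper):

```python
import numpy as np

def bce(p, label):
    """Binary cross-entropy on discriminator probabilities."""
    p = np.clip(p, 1e-7, 1 - 1e-7)
    return -(label * np.log(p) + (1 - label) * np.log(1 - p)).mean()

# hypothetical outputs: D(x) on a real batch, D(G(z)) on a fake batch
d_real = np.array([0.9, 0.8, 0.95])
d_fake = np.array([0.1, 0.2, 0.05])

d_loss = bce(d_real, 1.0) + bce(d_fake, 0.0)  # discriminator: real -> 1, fake -> 0
g_loss = bce(d_fake, 1.0)                     # generator: wants its fakes scored as 1
```

With the discriminator winning (as in these numbers), the generator loss is large; dc-gan and ac-gan change the architectures and add conditioning, not this basic objective.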
- Understanding style transfer theory
- Basic usage
- Understanding fast style transfer theory
- Fast Style Transfer usage
- download training set -> me
- coding -> me
- build test set -> me and teammate
- train
- optimize
Failed because the output image is too big and the sample set is not large enough
Optimize it in 2 days
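The style loss at the heart of the transfer work above compares Gram matrices of feature maps; a small NumPy sketch with random arrays standing in for VGG features:

```python
import numpy as np

def gram_matrix(feat):
    """Style representation: channel-channel correlations of an (H, W, C) feature map."""
    h, w, c = feat.shape
    f = feat.reshape(h * w, c)
    return f.T @ f / (h * w * c)

def style_loss(feat_gen, feat_style):
    """Squared distance between the Gram matrices of generated and style features."""
    return np.sum((gram_matrix(feat_gen) - gram_matrix(feat_style)) ** 2)

rng = np.random.default_rng(0)
style_feat = rng.normal(size=(8, 8, 4))   # stand-in for style-image features
gen_feat = rng.normal(size=(8, 8, 4))     # stand-in for generated-image features
loss = style_loss(gen_feat, style_feat)
```

Fast style transfer keeps this same loss but trains a feed-forward network to minimize it, instead of optimizing the output image directly.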
[x] Stock LSTM prediction: use an LSTM neural network to predict stock data
[x] VAE
[x] vocal track AutoEncoder
- V1.0 basic AutoEncoder model
- V1.1 batch_norm
- [ ] Generator
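The V1.0 basic AutoEncoder above can be sketched as an untrained encode/decode pair (random weights as placeholders, no training loop; shapes and names are illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
n_in, n_code = 8, 3                               # 8-dim input, 3-dim bottleneck
W_enc = rng.normal(scale=0.5, size=(n_in, n_code))
W_dec = rng.normal(scale=0.5, size=(n_code, n_in))

def autoencoder(x):
    code = sigmoid(x @ W_enc)     # encoder: compress to the bottleneck
    recon = sigmoid(code @ W_dec) # decoder: reconstruct the input
    return code, recon

x = rng.random(n_in)              # toy stand-in for a vocal-track feature frame
code, recon = autoencoder(x)
```

Training would minimize reconstruction error between `recon` and `x`; the V1.1 batch_norm change inserts normalization between these layers.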