Code example #1
# ============== Perceptron testing
from matplotlib.pyplot import plot, show
from scipy.linalg import norm
from perceptron import Perceptron, generate_data, GENERATE_TEST_SET_AMOUNT

perceptron = Perceptron()
perceptron.print_training_data()

test_data_arr = generate_data(GENERATE_TEST_SET_AMOUNT)  # test set generation


error_count = 0
# Perceptron test
for data_raw in test_data_arr:
    resp = perceptron.activation(data_raw)
    if resp != data_raw[-1]:  # if the response is not correct
        error_count += 1
    if resp == 1:
        plot(data_raw[0], data_raw[1], 'ob')
    else:
        plot(data_raw[0], data_raw[1], 'or')


# ========== PRINT INFO
print("error % =", (error_count / ITERATION_MAX) * 100)
perceptron.print_weights_info()

vector_length = norm(perceptron.weights)
normalized_vector_arr = []
for weight in perceptron.weights:
    normalized_vector_arr.append(weight / vector_length)
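
The normalized weight vector above is computed but never used, and the imported show is never called, so the scatter plot is never displayed. Below is a minimal sketch that finishes the picture; it assumes perceptron.weights has the layout [w_x, w_y, bias], which the source does not show.

# Assumed layout: two coordinate weights followed by a bias term.
w_x, w_y, bias = normalized_vector_arr
x_min = min(point[0] for point in test_data_arr)
x_max = max(point[0] for point in test_data_arr)
boundary_x = [x_min, x_max]
boundary_y = [-(w_x * x + bias) / w_y for x in boundary_x]  # solve w_x*x + w_y*y + bias = 0 for y
plot(boundary_x, boundary_y, 'g-')  # separating line in green
show()  # render the test points and the boundary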
Code example #2
File: Perceptron.py  Project: guanxv/Codecademy_note
class Perceptron:
    # Reconstructed scaffold: the snippet begins mid-method, so the class
    # header, constructor, and loop setup below are assumptions (Codecademy-
    # style defaults; the true values are not shown in the source).
    def __init__(self, num_inputs=2, weights=[1, 1]):
        self.num_inputs = num_inputs
        self.weights = weights

    def weighted_sum(self, inputs):
        # Dot product of the weight vector and the input vector.
        weighted_sum = 0
        for i in range(self.num_inputs):
            weighted_sum += self.weights[i] * inputs[i]
        return weighted_sum

    def activation(self, weighted_sum):
        # Step activation: +1 for a non-negative weighted sum, -1 otherwise.
        if weighted_sum >= 0:
            return 1
        return -1


cool_perceptron = Perceptron()
print(cool_perceptron.weighted_sum([24, 55]))

print(cool_perceptron.activation(52))
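# With the assumed default weights [1, 1], the first print outputs
# 79 (24*1 + 55*1) and the second outputs 1, since 52 >= 0; different
# defaults would change the first number accordingly.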
'''
PERCEPTRON
Training the Perceptron
Our perceptron can now make a prediction given inputs, but how do we know if it gets those predictions right?

Right now we expect the perceptron to perform poorly because it has random weights. We haven’t taught it anything yet, so we can’t expect it to get classifications correct! The good news is that we can train the perceptron to produce better and better results. To do this, we provide the perceptron with a training set: a collection of random inputs paired with their correct outputs.

On the right, you can see a plot of scattered points with positive and negative labels. This is a simple training set.

In the code, the training set has been represented as a dictionary with coordinates as keys and labels as values. For example:

training_set = {(18, 49): -1, (2, 17): 1, (24, 35): -1, (14, 26): 1, (17, 34): -1}
We can measure the perceptron’s actual performance against this training set. By doing so, we get a sense of “how bad” the perceptron is. The goal is to gradually nudge the perceptron toward a better version of itself, by slightly changing its weights, until it correctly matches all the input-output pairs in the training set (a sketch of this update rule follows the lesson text below).

We will use these points to train the perceptron to correctly separate the positive labels from the negative labels, and we will visualize the learned perceptron as a line. Stay tuned!
'''
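
The “nudging” described above is the classic perceptron update rule. Below is a minimal sketch of what such a training method could look like for the Perceptron class above; the method name `training`, the `learning_rate` argument, and the epoch cap are illustrative assumptions, not taken from the source.

def training(self, training_set, learning_rate=1, max_epochs=100):
    # Sweep the training set repeatedly, nudging each weight whenever a
    # point is misclassified; stop early once every point is correct.
    for _ in range(max_epochs):
        total_error = 0
        for inputs, actual in training_set.items():
            prediction = self.activation(self.weighted_sum(inputs))
            error = actual - prediction  # 0 if correct, +2 or -2 if wrong
            total_error += abs(error)
            for i in range(self.num_inputs):
                self.weights[i] += error * inputs[i] * learning_rate
        if total_error == 0:
            return

Perceptron.training = training  # attach the sketch to the class above

training_set = {(18, 49): -1, (2, 17): 1, (24, 35): -1, (14, 26): 1, (17, 34): -1}
cool_perceptron.training(training_set)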