# Example 1
#  In this part, you will get to try different values of lambda and
#  see how regularization affects the decision boundary.
#
#  Try the following values of lambda (0, 1, 10, 100); a sketch that loops
#  over all four values appears after the single-lambda run below.
#
#  How does the decision boundary change when you vary lambda? How does
#  the training set accuracy vary?
#

# Initialize fitting parameters
initial_theta = np.zeros(X.shape[1])

# Set regularization parameter lambda to 1
Lambda = 1.
result = optimize(initial_theta, X, y, Lambda)
theta = result.x
cost = result.fun
# Print theta to screen
print('Lambda: %f' % Lambda)
print('Cost at theta found by scipy: %f' % cost)
print('Theta:', theta)

# Plot Boundary
plotDecisionBoundary(theta, X, y)
plt.title(r'$\lambda$ = ' + str(Lambda))

# Labels and Legend
plt.xlabel('Microchip Test 1')
plt.ylabel('Microchip Test 2')
plt.show()
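
# A minimal sketch of trying all the suggested values of lambda in one loop,
# assuming the same `optimize` and `plotDecisionBoundary` helpers used in the
# single-lambda run above:
for lam in [0., 1., 10., 100.]:
    res = optimize(np.zeros(X.shape[1]), X, y, lam)
    plotDecisionBoundary(res.x, X, y)
    plt.title(r'$\lambda$ = ' + str(lam))
    plt.xlabel('Microchip Test 1')
    plt.ylabel('Microchip Test 2')
    plt.show()
    print('lambda = %.1f, cost at optimum = %f' % (lam, res.fun))
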
print('theta:')
print('\t[{:.3f}, {:.3f}, {:.3f}]'.format(*theta))
print('Expected theta (approx):\n\t[-25.161, 0.206, 0.201]')

# Once `optimize.minimize` completes, we want to use the final value of $\theta$ to visualize the decision boundary on the training data, as shown in the figure below.
#
# ![](Figures/decision_boundary1.png)
#
# To do so, we have written a function `plotDecisionBoundary` for plotting the decision boundary on top of the training data. You do not need to write any code for plotting the decision boundary, but we encourage you to look at the code in `plotDecisionBoundary` to see how to plot such a boundary using the $\theta$ values. You can find this function in the `utils.py` file which comes with this assignment.
#

# In[12]:

# Plot Boundary
utils.plotDecisionBoundary(plotData, theta, X, y)
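
# A minimal sketch of the idea behind drawing a linear decision boundary from
# the theta values (an illustration, not the actual utils.plotDecisionBoundary
# code; it assumes X has an intercept column followed by the two exam scores
# and that theta has three entries). The boundary is the line where
# theta[0] + theta[1]*x1 + theta[2]*x2 = 0, i.e.
# x2 = -(theta[0] + theta[1]*x1) / theta[2].
plot_x = np.array([X[:, 1].min() - 2, X[:, 1].max() + 2])
plot_y = -(theta[0] + theta[1] * plot_x) / theta[2]
plt.plot(plot_x, plot_y, label='Decision boundary')
plt.legend()
plt.show()
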

# <a id="section4"></a>
# #### 1.2.4 Evaluating logistic regression
#
# After learning the parameters, you can use the model to predict whether a particular student will be admitted. For a student with an Exam 1 score of 45 and an Exam 2 score of 85, you should expect to see an admission
# probability of 0.776. Another way to evaluate the quality of the parameters we have found is to see how well the learned model predicts on our training set. In this part, your task is to complete the code in function `predict`. The predict function will produce “1” or “0” predictions given a dataset and a learned parameter vector $\theta$.
# <a id="predict"></a>

# In[13]:


def predict(theta, X):
    """
    Predict whether the label is 0 or 1 using learned logistic regression.
    Computes the predictions for X using a threshold at 0.5
    (i.e., predict 1 if sigmoid(theta^T x) >= 0.5, else 0).
    """
    # Assumes a `sigmoid` function is defined earlier in this notebook.
    return (sigmoid(X.dot(theta)) >= 0.5).astype(int)
# Example 3
# Read the regularized logistic regression data set (ex2data2.txt)
data2 = []
with open("ex2data2.txt", "r") as f:
    for line in f.readlines():
        data2.append(line.strip().split(","))

data2 = np.array(data2, dtype='float64')

X = data2[:, :-1]
y = data2[:, -1]

pos = X[y == 1]
neg = X[y == 0]

plt.figure(figsize=(8, 6))
plt.plot(pos[:, 0], pos[:, 1], 'rx', label="y = 1", markersize=6)
plt.plot(neg[:, 0], neg[:, 1], 'k+', label="y = 0", markersize=6)
plt.title("Data")
plt.xlabel("Microchip Test 1")
plt.ylabel("Microchip Test 2")
plt.legend()
plt.show()

# Train regularized logistic regression with degree-6 polynomial features
model = Log_reg(data2, poly_features=6, lamda=1)
model.train()
print(model.theta)

# Plot the decision boundary using the learned parameters
plotDecisionBoundary(plotData, model.theta, map_feature(X), y)

# Training set accuracy
p = model.predict(X)
print("Training accuracy is {}%".format(np.mean(p == y) * 100))