Code example #1
File: update.py  Project: parenthetical-e/annclass
from annclass.activation import linear

def linear_update(xs, ws, y, ep):
    """ Update rules for linear ANNs using the delta rule,
    i.e., delta = ep * (xs (h - y)), where h is hypothesis/activation
    (from annclass.activation.linear()).
    
    Note: in a multilayer ANN the average of two good sets of weights may be
    a bad set of weights, i.e. the problem is not convex.  Unlike perceptrons,
    whose weights are proven to converge to a 'good' set of weights, linear
    (or multilayer) ANNs are proven instead to have their predictions approach
    the target.  Also note that perceptrons are not proven to have their
    predictions approach the target, yet somehow the weights are known-good.
    An odd state of affairs.
    
    FYI, linear neurons are linear filters in EE terms (with ep setting the
    severity of the filtering).  For linear activations and squared error
    functions there are of course analytical solutions for the weights rather
    than the iterative procedure used here (e.g. OLS).  Iterative methods are
    used so that non-linear activations and non-squared error functions can be
    handled, and to better mimic the brain.
    """
    
    h = linear(xs, ws, None)
    delta_w = delta(xs, y, h, ep)  # delta-rule weight change
    ws_new = ws + delta_w
    
    # Return the new weights and the new guess.
    return ws_new, linear(xs, ws_new, None)
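To make the update rule concrete, here is a minimal standalone sketch of a single delta-rule step in plain NumPy. It does not call the annclass delta() or linear() functions, the values are hypothetical, and it uses the textbook gradient-descent sign (y - h):

import numpy as np

def delta_rule_step(xs, ws, y, ep):
    """One delta-rule update: nudge ws so the prediction moves toward y."""
    h = xs @ ws                     # linear activation (the hypothesis)
    delta_w = ep * xs * (y - h)     # textbook gradient-descent sign
    return ws + delta_w

xs = np.array([1.0, 0.5, -0.2])     # hypothetical input
ws = np.array([0.1, 0.1, 0.1])      # hypothetical starting weights
ws_new = delta_rule_step(xs, ws, y=1.0, ep=0.1)
print(xs @ ws, xs @ ws_new)         # 0.13 -> ~0.24, moving toward y = 1.0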
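Since the docstring mentions the analytical (OLS) alternative, the sketch below, again with hypothetical data and the same re-derived delta rule, compares the closed-form least-squares weights with the weights reached by iterative updates:

import numpy as np

# Hypothetical data: 20 samples, 3 features, known weights to recover.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
true_ws = np.array([0.5, -1.0, 2.0])
y = X @ true_ws

# Closed-form (OLS) solution for the linear / squared-error case.
ws_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# Iterative delta-rule solution, one sample at a time.
ws_iter = np.zeros(3)
ep = 0.05
for _ in range(200):                        # passes over the data
    for xs, target in zip(X, y):
        h = xs @ ws_iter                    # linear activation
        ws_iter += ep * xs * (target - h)   # delta-rule step

print(ws_ols)   # matches true_ws up to numerical precision
print(ws_iter)  # approaches the same weights iteratively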
Code example #2
File: test.py  Project: parenthetical-e/annclass
import numpy as np

from annclass.activation import linear
from annclass.update import linear_update  # assuming linear_update lives in annclass/update.py

def print_linear_update(xs, ws, bias, y, ep):
    """ Prints initial and updated ws, and h. """
    
    # Convert xs and ws to the proper form
    # for use in linear_update.
    xs = np.array(xs)
    ws = np.array(ws)
    # Not using setup_bias, so do this by hand.
    
    h = linear(xs, ws, None)  # The initial hypothesis
    
    ws_new, h_new = linear_update(xs, ws, y, ep)
    
    print("ws_intial: {0}\nws_new: {1}".format(ws, ws_new))
    print("h_intial: {0}\nh_new: {1}".format(h,h_new))