Python implementation of MLMVN model with hardware acceleration via PyCUDA, GUI via Qt
pyMLMVN - a simulator for MLMVN in Python

Note - Please do not confuse this project with MiraHead's excellent but unaffiliated PyMLMVN

Goals

  • Implement a functional simulator for Multi-Layered neural networks with Multi-Valued Neurons (MLMVN)
  • Leverage the computational power of general-purpose graphics processing units (GPGPUs) to decrease simulator run-time
  • Provide a convenient interface to the MLMVN that is conducive to programming multiple simulations & experiments
  • Allow users to manage several network configurations and simulation instances with an easy-to-use GUI
  • Provide a variety of run-time metrics to monitor network performance

System Requirements

Core Requirements

CUDA Requirements

GUI Requirements

Basic Use - Training a network

import mlmvn
testNet = mlmvn.network(PARAMETERS HERE)
testNet.learn()

The above is a simplified example of how to invoke the learning algorithm of MLMVN from a Python interpreter. As of this writing, the mlmvn module contains a single object class called network. Please see the network parameters section for a complete list of required simulator arguments.

network Simulator Commands

  1. learn(): invokes the learning algorithm, using the simulation parameters passed during network initialization
  2. test(): test classification of network using current weights
  3. filter(): calculates and returns network outputs without calculating global error
  4. exportWeights(outputFile=None): exports the current network weights to a .mat file, file name specified either at network initialization or as a function argument
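
To make the distinction between test() and filter() concrete: filter() only runs the forward pass. The sketch below shows what a forward pass through a single multi-valued neuron conceptually computes (weighted sum of complex inputs, then a continuous or discrete unit-circle activation). It is an illustrative toy, not pyMLMVN's implementation, and the function name mvn_forward is invented for this example.

```python
import numpy as np

def mvn_forward(weights, inputs, sectors=None):
    """Toy forward pass of one multi-valued neuron (MVN).

    weights: complex array of length n+1, bias first; inputs: complex array of length n.
    With sectors=None the continuous activation z/|z| is returned; otherwise z is
    snapped to one of `sectors` equidistant roots of unity (discrete activation).
    Illustrative sketch only - not pyMLMVN's code.
    """
    z = weights[0] + np.dot(weights[1:], inputs)        # weighted sum with bias
    if sectors is None:
        return z / abs(z)                               # continuous MVN activation
    k = int(np.floor(sectors * (np.angle(z) % (2 * np.pi)) / (2 * np.pi)))
    return np.exp(2j * np.pi * k / sectors)             # k-th root of unity
```

filter() would apply this layer by layer and return the outputs; test() would additionally compare them against the desired outputs to compute a global error.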

network Initialization Parameter List

  1. inputName: string or numpy.ndarray, dataset used for the supervised learning algorithm
  • a string will be interpreted as a path to a .txt file and will be imported with the np.loadtxt method
  • the argument is assumed to contain both input and desired output values
  2. outputName: string, filename used to save the current network weights
  3. netSize: list of integers, topology for the MLMVN
  • For example, a network with 3 hidden layers of 100 neurons each and an output layer of 6 neurons needs the argument [100, 100, 100, 6]
  4. discInput: boolean, denotes whether the passed input values are discrete or continuous (real-valued) numbers
  5. discOutput: ditto, but for the passed output values
  6. globalThresh: integer/float value, minimum threshold classification error for the network over all learning samples
  7. localThresh: ditto, but for the per-sample network classification error
  8. sectors: integer, number of equidistant sectors to divide the complex unit circle into, discrete outputs only
  • in general, this should equal the number of unique classification labels in the learning set, although you may need to experiment for optimal results
  9. weightKey: string, determines the initial network weights. Acceptable arguments are:
  • 'random': the network pseudo-randomizes the weights with a normal distribution based on the provided topology
  • a string ending in '.mat': assumed to be a path to a MATLAB file containing a cell array whose dimensions exactly match the provided topology (this may change in the future)
  10. stopKey: string, denotes the type of statistical global network error used by the learning algorithm. Acceptable arguments are:
  • 'error': simple error rate, calculated as the percentage of incorrect network outputs
  • 'max': maximum number of absolute network errors
  • 'mse': mean square error
  • 'rmse': root mean square error
  • 'armse': angular root mean square error, used only for continuous desired output values
  11. softMargins: boolean, determines whether the soft-margins method is used, angular RMSE only. As of this writing it has no effect, so leave it at its default value, True.
  12. cuda: boolean, set to True to make the simulator use GPGPU acceleration via the CUDA framework
  13. iterLimit: integer, maximum number of iterations the simulator will run
  • set to None (default) or 0 to disable
  14. refreshLimit: integer, number of iterations between printed updates of the global error
  • set to None or 0 to disable; default is 1 (update every iteration)
  15. timeLimit: integer, number of seconds (by system clock) the learning algorithm will run before quitting
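
Putting the list above together, a full initialization might look like the following. The keyword names come from the parameter list; whether network() accepts them all as keyword arguments, and the file names used here, are assumptions of this sketch.

```python
# Hypothetical parameter set following the list above. The file names
# "dataset.txt" and "weights.mat" are placeholders for this example.
params = dict(
    inputName="dataset.txt",     # path loaded via np.loadtxt (inputs + desired outputs)
    outputName="weights.mat",    # default target for exportWeights()
    netSize=[100, 100, 100, 6],  # three hidden layers of 100 neurons, 6 output neurons
    discInput=True,
    discOutput=True,
    globalThresh=0.0,
    localThresh=0.0,
    sectors=6,                   # roughly the number of unique classes in the learning set
    weightKey="random",          # normally distributed initial weights
    stopKey="rmse",              # stop on root mean square error
    softMargins=True,
    cuda=False,                  # set True for GPGPU acceleration
    iterLimit=10000,
    refreshLimit=1,              # print global error every iteration
    timeLimit=3600,              # give up after one hour of wall-clock time
)
# testNet = mlmvn.network(**params)
# testNet.learn()
```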

Project Status

  • Core
    • Learning
    • Testing
    • Filtering
    • Weights import/export
    • Global error calculation
      • MSE & RMSE
      • Error Rate (not tested)
      • Max Absolute Error (not tested)
      • Angular RMSE (not tested)
    • Soft margins for ARMSE
    • Batch learning
    • Documentation (incomplete)
  • GUI
    • Basic interface
    • Management of multiple simulations
    • Import/export of configuration files (not tested)
    • Error monitoring (works, broken for multiple simulations)
    • Iteration limit on training
    • Time limit on training
  • GPU Acceleration (works, could always use work)
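
The global-error metrics listed above (error rate, max absolute errors, MSE, RMSE, angular RMSE) can be read as follows for complex-valued network outputs. This is one plausible interpretation for illustration, not pyMLMVN's actual code; the function name global_errors is invented here.

```python
import numpy as np

def global_errors(outputs, desired):
    """Plausible readings of the stopKey metrics for complex network outputs.

    Angular error is the wrapped phase difference between output and target
    on the unit circle. Illustrative sketch only.
    """
    err = np.abs(outputs - desired)                   # absolute error per sample
    diff = np.angle(outputs) - np.angle(desired)
    ang = (diff + np.pi) % (2 * np.pi) - np.pi        # wrap phase error into (-pi, pi]
    return {
        "error": np.mean(err > 1e-12),                # fraction of incorrect outputs
        "max":   int(np.sum(err > 1e-12)),            # count of absolute errors
        "mse":   np.mean(err ** 2),                   # mean square error
        "rmse":  np.sqrt(np.mean(err ** 2)),          # root mean square error
        "armse": np.sqrt(np.mean(ang ** 2)),          # angular RMSE
    }
```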
