def main():
    # Let's be very verbose!
    logging.basicConfig(level=logging.INFO)

    # Let's do multiprocessing this time with a queue wrapping of storage
    env = Environment(trajectory='Example_07_BRIAN',
                      filename='experiments/example_07/HDF5/example_07.hdf5',
                      file_title='Example_07_Euler_Integration',
                      log_folder='experiments/example_07/LOGS/',
                      comment='Go Brian!',
                      dynamically_imported_classes=[BrianMonitorResult, BrianParameter],
                      multiproc=True,
                      wrap_mode='QUEUE',
                      ncores=2)

    traj = env.v_trajectory

    # 1st a) add the parameters
    add_params(traj)

    # 1st b) prepare, we want to explore the different network sizes
    # and different tauw time scales
    traj.f_explore(cartesian_product({traj.f_get('N').v_full_name: [50, 60],
                                      traj.f_get('tauw').v_full_name: [30 * ms, 40 * ms]}))

    # 2nd let's run our experiment
    env.f_run(run_net)
def main():
    # Let's be very verbose!
    logging.basicConfig(level=logging.INFO)

    # Let's do multiprocessing this time with a queue wrapping of storage
    filename = os.path.join('hdf5', 'example_07.hdf5')
    env = Environment(trajectory='Example_07_BRIAN',
                      filename=filename,
                      file_title='Example_07_Brian',
                      comment='Go Brian!',
                      dynamically_imported_classes=[BrianMonitorResult, BrianParameter],
                      multiproc=True,
                      wrap_mode='QUEUE',
                      ncores=2)

    traj = env.trajectory

    # 1st a) add the parameters
    add_params(traj)

    # 1st b) prepare, we want to explore the different network sizes
    # and different tauw time scales
    traj.f_explore(cartesian_product({traj.f_get('N').v_full_name: [50, 60],
                                      traj.f_get('tauw').v_full_name: [30 * ms, 40 * ms]}))

    # 2nd let's run our experiment
    env.run(run_net)

    # You can take a look at the results in the hdf5 file if you want!

    # Finally disable logging and close all log-files
    env.disable_logging()
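The exploration above relies on pypet's `cartesian_product` to turn two short value lists into one run per (N, tauw) combination. The following is a minimal pure-Python sketch of that semantics (a hypothetical helper, not the library implementation): each key ends up with one list entry per combination, so reading the lists row-wise enumerates the full grid.

```python
from itertools import product

def cartesian_product_sketch(param_dict):
    """Sketch of pypet-style cartesian exploration: return equal-length
    value lists in which each row is one parameter combination."""
    keys = list(param_dict)
    combos = list(product(*(param_dict[k] for k in keys)))
    return {k: [combo[i] for combo in combos] for i, k in enumerate(keys)}

grid = cartesian_product_sketch({'N': [50, 60], 'tauw': [30, 40]})
# Four runs in total; every (N, tauw) pair appears exactly once.
```

With the two-by-two input above, the trajectory would be expanded to four runs, which is why the example stores four result groups.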
def main():
    try:
        # Create an environment that handles running
        env = Environment(trajectory='Example1_Quick_And_Not_So_Dirty',
                          filename='experiments/example_01/HDF5/',
                          file_title='Example1_Quick_And_Not_So_Dirty',
                          log_folder='experiments/example_01/LOGS/',
                          comment='The first example!',
                          complib='blosc',
                          small_overview_tables=False,
                          git_repository='./',
                          git_message='Im a message!',
                          sumatra_project='./',
                          sumatra_reason='Testing!')

        # Get the trajectory from the environment
        traj = env.v_trajectory

        # Add both parameters
        traj.f_add_parameter('x', 1, comment='Im the first dimension!')
        traj.f_add_parameter('y', 1, comment='Im the second dimension!')

        # Explore the parameters with a cartesian product:
        traj.f_explore(cartesian_product({'x': [1, 2, 3], 'y': [6, 7, 8]}))

        # Run the simulation
        env.f_run(multiply)

        print("Python git test successful")

        # traj.f_expand({'x': [3, 3], 'y': [42, 43]})
        #
        # env.f_run(multiply)
    except Exception as e:
        print(repr(e))
        sys.exit(1)
def test_time_display_of_loading(self): filename = make_temp_dir('sloooow.hdf5') env = Environment(trajectory='traj', add_time=True, filename=filename, log_stdout=False, log_config=get_log_config(), dynamic_imports=SlowResult, display_time=0.1) traj = env.v_traj res=traj.f_add_result(SlowResult, 'iii', 42, 43, comment='llk') traj.f_store() service_logger = traj.v_storage_service._logger root = logging.getLogger('pypet') old_level = root.level service_logger.setLevel(logging.INFO) root.setLevel(logging.INFO) traj.f_load(load_data=3) service_logger.setLevel(old_level) root.setLevel(old_level) path = get_log_path(traj) mainfilename = os.path.join(path, 'LOG.txt') with open(mainfilename, mode='r') as mainf: full_text = mainf.read() self.assertTrue('nodes/s)' in full_text) env.f_disable_logging()
def run_experiments():
    logging.basicConfig(level=logging.INFO)

    logfolder = os.path.join(tempfile.gettempdir(), TEMPDIR, 'logs')
    pathfolder = os.path.join(tempfile.gettempdir(), TEMPDIR, 'hdf5')

    exponents = np.arange(0, 8, 1)
    res_per_run = 100
    traj_names = []
    filenames = []
    runs = (np.ones(len(exponents)) * 2) ** exponents

    for adx, nruns in enumerate(runs):
        env = Environment(log_folder=logfolder, filename=pathfolder,
                          ncores=2, multiproc=True,
                          use_pool=True,
                          wrap_mode='QUEUE')

        traj = env.v_trajectory

        traj.f_add_parameter('res_per_run', res_per_run)
        traj.f_add_parameter('trial', 0)

        traj.f_explore({'trial': list(range(int(nruns)))})

        env.f_run(add_data)

        traj_names.append(traj.v_name)
        filenames.append(traj.v_storage_service.filename)

    return filenames, traj_names, pathfolder
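The benchmark above sizes its trajectories with `(np.ones(len(exponents)) * 2) ** exponents`, i.e. run counts that double on each iteration. A plain-Python equivalent of that expression makes the schedule explicit:

```python
# Run counts doubling each iteration: 2**0 up to 2**7,
# matching (np.ones(len(exponents)) * 2) ** exponents above.
exponents = list(range(0, 8))
runs = [2 ** e for e in exponents]
# runs == [1, 2, 4, 8, 16, 32, 64, 128]
```

Each environment in the loop therefore explores `trial` over 1, 2, 4, ... up to 128 runs.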
def main():
    filename = os.path.join('hdf5', 'Clustered_Network.hdf5')
    env = Environment(trajectory='Clustered_Network', add_time=False,
                      filename=filename,
                      continuable=False,
                      lazy_debug=False,
                      multiproc=True,
                      ncores=2,
                      use_pool=False,  # We cannot use a pool, our network cannot be pickled
                      wrap_mode='QUEUE',
                      overwrite_file=True)

    # Get the trajectory container
    traj = env.trajectory

    # We introduce a `meta` parameter that we can use to easily rescale our network
    scale = 0.5  # To obtain the results from the paper scale this to 1.0
    # Be aware that your machine will need a lot of memory then!
    traj.f_add_parameter('simulation.scale', scale,
                         comment='Meta parameter that can scale default settings. '
                                 'Rescales number of neurons and connection strengths, but '
                                 'not the cluster size.')

    # We create a Manager and pass all our components to the Manager.
    # Note the order: CNNeuronGroups are scheduled before CNConnections,
    # and the Fano Factor computation depends on the CNMonitorAnalysis
    clustered_network_manager = NetworkManager(network_runner=CNNetworkRunner(),
                                               component_list=(CNNeuronGroup(),
                                                               CNConnections()),
                                               analyser_list=(CNMonitorAnalysis(),
                                                              CNFanoFactorComputer()))

    # Add original parameters (but scaled according to `scale`)
    clustered_network_manager.add_parameters(traj)

    # We need `tolist` here since our parameter is a Python float and not a
    # numpy float.
    explore_list = np.arange(1.0, 2.6, 0.2).tolist()

    # Explore different values of `R_ee`
    traj.f_explore({'R_ee': explore_list})

    # Pre-build network components
    clustered_network_manager.pre_build(traj)

    # Run the network simulation
    traj.f_store()  # Let's store the parameters already before the run
    env.run(clustered_network_manager.run_network)

    # Finally disable logging and close all log-files
    env.disable_logging()
def test_file_overwriting(self): self.traj.f_store() with pt.open_file(self.filename, mode='r') as file: nchildren = len(file.root._v_children) self.assertTrue(nchildren > 0) env2 = Environment(filename=self.filename, log_config=get_log_config()) traj2 = env2.v_trajectory traj2.f_store() self.assertTrue(os.path.exists(self.filename)) with pt.open_file(self.filename, mode='r') as file: nchildren = len(file.root._v_children) self.assertTrue(nchildren > 1) env3 = Environment(filename=self.filename, overwrite_file=True, log_config=get_log_config()) self.assertFalse(os.path.exists(self.filename)) env2.f_disable_logging() env3.f_disable_logging()
def test_expand_after_reload(self): self.traj.f_add_parameter('TEST', 'test_expand_after_reload') ###Explore self.explore(self.traj) self.make_run() traj_name = self.traj.v_name self.env = Environment(trajectory=self.traj, log_stdout=False, log_config=get_log_config()) self.traj = self.env.v_trajectory self.traj.f_load(name=traj_name) self.traj.res.f_remove() self.traj.dpar.f_remove() self.expand() get_root_logger().info('\n $$$$$$$$$$$$ Second Run $$$$$$$$$$ \n') self.make_run() newtraj = self.load_trajectory(trajectory_name=self.traj.v_name,as_new=False) self.traj.f_load_skeleton() self.traj.f_load_items(self.traj.f_to_dict().keys(), only_empties=True) self.compare_trajectories(self.traj, newtraj)
def make_environment(self, idx, filename, continuable=True, delete_continue=False): #self.filename = '../../experiments/tests/HDF5/test.hdf5' self.logfolder = make_temp_dir( os.path.join('experiments', 'tests', 'Log')) self.cnt_folder = make_temp_dir( os.path.join('experiments', 'tests', 'cnt')) trajname = 'Test%d' % idx + '_' + make_trajectory_name(self) env = Environment(trajectory=trajname, filename=filename, file_title=trajname, log_stdout=False, log_config=get_log_config(), continuable=continuable, continue_folder=self.cnt_folder, delete_continue=delete_continue, large_overview_tables=True) self.envs.append(env) self.trajs.append(env.v_trajectory)
def test_make_default_file_when_giving_directory_without_slash(self):
    filename = make_temp_dir('test.hdf5')
    head, tail = os.path.split(filename)
    env = Environment(filename=head)
    the_file_name = env.v_traj.v_name + '.hdf5'
    head, tail = os.path.split(env.v_traj.v_storage_service.filename)
    self.assertEqual(tail, the_file_name)
def setUp(self): self.set_mode() self.filename = make_temp_dir(os.path.join('experiments','tests','HDF5','sort_tests.hdf5')) self.trajname = make_trajectory_name(self) env = Environment(trajectory=self.trajname,filename=self.filename, file_title=self.trajname, log_stdout=self.log_stdout, log_config=get_log_config() if self.log_config else None, multiproc=self.multiproc, wrap_mode=self.mode, ncores=self.ncores, use_pool=self.use_pool, use_scoop=self.use_scoop, port=self.port, freeze_input=self.freeze_input, graceful_exit=self.graceful_exit) traj = env.v_trajectory traj.v_standard_parameter=Parameter traj.f_add_parameter('x',99) traj.f_add_parameter('y',99) self.env=env self.traj=traj
def test_expand_after_reload(self):
    self.traj.f_add_parameter('TEST', 'test_expand_after_reload')

    ### Explore
    self.explore(self.traj)

    self.make_run()

    traj_name = self.traj.v_name

    self.env = Environment(trajectory=self.traj, filename=self.filename,
                           file_title=self.trajname,
                           log_folder=self.logfolder,
                           log_stdout=False)

    self.traj = self.env.v_trajectory
    self.traj.f_load(name=traj_name)

    self.expand()

    print('\n $$$$$$$$$$$$ Second Run $$$$$$$$$$ \n')

    self.make_run()

    newtraj = self.load_trajectory(trajectory_name=self.traj.v_name, as_new=False)
    self.traj.f_update_skeleton()
    self.traj.f_load_items(self.traj.f_to_dict().keys(), only_empties=True)
    self.compare_trajectories(self.traj, newtraj)
def make_environment_mp(self, idx, filename): #self.filename = '../../experiments/tests/HDF5/test.hdf5' self.logfolder = make_temp_dir(os.path.join('experiments', 'tests', 'Log')) self.cnt_folder = make_temp_dir(os.path.join('experiments', 'tests', 'cnt')) trajname = 'Test%d' % idx + '_' + make_trajectory_name(self) env = Environment(trajectory=trajname, dynamic_imports=[CustomParameter], filename=filename, file_title=trajname, log_stdout=False, purge_duplicate_comments=False, log_config=get_log_config(), continuable=True, continue_folder=self.cnt_folder, delete_continue=False, multiproc=True, use_pool=True, ncores=4) self.envs.append(env) self.trajs.append( env.v_trajectory)
def setUp(self):
    env = Environment(trajectory='Test_' + repr(time.time()).replace('.', '_'),
                      filename=make_temp_dir(
                          os.path.join('experiments', 'tests', 'briantests',
                                       'HDF5', 'briantest.hdf5')),
                      file_title='test',
                      log_config=get_log_config(),
                      dynamic_imports=['pypet.brian.parameter.BrianParameter',
                                       BrianMonitorResult],
                      multiproc=False)

    traj = env.v_trajectory

    add_params(traj)

    traj.f_explore(cartesian_product({traj.f_get('N').v_full_name: [50, 60],
                                      traj.f_get('tauw').v_full_name: [30 * ms, 40 * ms]}))

    self.traj = traj
    self.env = env
def test_expand(self): ###Explore self.explore(self.traj) results = self.env.f_run(multiply) self.are_results_in_order(results) get_root_logger().info(results) traj = self.traj self.assertEqual(len(traj), len(list(compat.listvalues(self.explore_dict)[0]))) self.traj.f_load_skeleton() self.traj.f_load_items(self.traj.f_to_dict().keys(), only_empties=True) self.check_if_z_is_correct(traj) traj_name = self.env.v_trajectory.v_name del self.env self.env = Environment(trajectory=self.traj, log_stdout=False, log_config=get_log_config()) self.traj = self.env.v_trajectory self.traj.f_load(name=traj_name) self.expand(self.traj) results = self.env.f_run(multiply) self.are_results_in_order(results) traj = self.traj self.assertTrue( len(traj) == len(compat.listvalues(self.expand_dict)[0]) + len(compat.listvalues(self.explore_dict)[0])) self.traj.f_load_skeleton() self.traj.f_load_items(self.traj.f_to_dict().keys(), only_empties=True) self.check_if_z_is_correct(traj) newtraj = self.load_trajectory(trajectory_name=self.traj.v_name, as_new=False) self.traj.f_load_skeleton() self.traj.f_load_items(self.traj.f_to_dict().keys(), only_empties=True) self.compare_trajectories(self.traj, newtraj)
def _make_env(self, idx):
    return Environment(trajectory=self.trajname + str(idx),
                       filename=self.filename,
                       file_title=self.trajname,
                       log_stdout=False,
                       log_config=get_log_config(),
                       multiproc=self.multiproc,
                       wrap_mode=self.mode,
                       ncores=self.ncores)
def setUp(self): self.multiproc = True self.mode = 'LOCK' self.trajname = make_trajectory_name(self) self.filename = make_temp_dir( os.path.join('experiments', 'tests', 'HDF5', '%s.hdf5' % self.trajname)) self.logfolder = make_temp_dir( os.path.join('experiments', 'tests', 'Log')) random.seed() cap_dicts = ( dict(cpu_cap=0.000001), # Ensure that these are triggered dict(memory_cap=(0.000001, 150.0)), dict(swap_cap=0.000001, )) cap_dict = cap_dicts[CapTest.cap_count] env = Environment(trajectory=self.trajname, filename=self.filename, file_title=self.trajname, log_folder=self.logfolder, logger_names=('pypet', 'test'), log_levels='ERROR', log_stdout=False, results_per_run=5, derived_parameters_per_run=5, multiproc=True, ncores=4, use_pool=False, niceness=check_nice(11), **cap_dict) logging.getLogger('test').error('Using Cap: %s and file: %s' % (str(cap_dict), str(self.filename))) # Loop through all possible cap configurations # and test one at a time CapTest.cap_count += 1 CapTest.cap_count = CapTest.cap_count % len(cap_dicts) traj = env.v_trajectory ## Create some parameters self.param_dict = {} create_param_dict(self.param_dict) ### Add some parameter: add_params(traj, self.param_dict) #remember the trajectory and the environment self.traj = traj self.env = env
def setUp(self): self.set_mode() self.logfolder = make_temp_dir(os.path.join('experiments', 'tests', 'Log')) random.seed() self.trajname = make_trajectory_name(self) self.filename = make_temp_dir(os.path.join('experiments', 'tests', 'HDF5', 'test%s.hdf5' % self.trajname)) env = Environment(trajectory=self.trajname, filename=self.filename, file_title=self.trajname, log_stdout=self.log_stdout, log_config=get_log_config(), results_per_run=5, wildcard_functions=self.wildcard_functions, derived_parameters_per_run=5, multiproc=self.multiproc, ncores=self.ncores, wrap_mode=self.mode, use_pool=self.use_pool, gc_interval=self.gc_interval, freeze_input=self.freeze_input, fletcher32=self.fletcher32, complevel=self.complevel, complib=self.complib, shuffle=self.shuffle, pandas_append=self.pandas_append, pandas_format=self.pandas_format, encoding=self.encoding, niceness=self.niceness, use_scoop=self.use_scoop, port=self.port, add_time=self.add_time, timeout=self.timeout, graceful_exit=self.graceful_exit) traj = env.v_trajectory traj.v_standard_parameter=Parameter ## Create some parameters self.param_dict={} create_param_dict(self.param_dict) ### Add some parameter: add_params(traj,self.param_dict) #remember the trajectory and the environment self.traj = traj self.env = env
def test_errors(self): tmp = make_temp_dir('cont') if dill is not None: env1 = Environment(continuable=True, continue_folder=tmp, log_config=None, filename=self.filename) with self.assertRaises(ValueError): env1.f_run_map(multiply_args, [1], [2], [3]) with self.assertRaises(ValueError): Environment(multiproc=True, use_pool=False, freeze_input=True, filename=self.filename, log_config=None) env3 = Environment(log_config=None, filename=self.filename) with self.assertRaises(ValueError): env3.f_run_map(multiply_args) with self.assertRaises(ValueError): Environment(use_scoop=True, immediate_postproc=True) with self.assertRaises(ValueError): Environment(use_pool=True, immediate_postproc=True) with self.assertRaises(ValueError): Environment(continuable=True, wrap_mode='QUEUE', continue_folder=tmp) with self.assertRaises(ValueError): Environment(use_scoop=True, wrap_mode='QUEUE') with self.assertRaises(ValueError): Environment(automatic_storing=False, continuable=True, continue_folder=tmp) with self.assertRaises(ValueError): Environment(port='www.nosi.de', wrap_mode='LOCK')
def test_file_overwriting(self): self.traj.f_store() with ptcompat.open_file(self.filename, mode='r') as file: nchildren = len(file.root._v_children) self.assertTrue(nchildren > 0) env2 = Environment(filename=self.filename, log_config=get_log_config()) traj2 = env2.v_trajectory traj2.f_store() self.assertTrue(os.path.exists(self.filename)) with ptcompat.open_file(self.filename, mode='r') as file: nchildren = len(file.root._v_children) self.assertTrue(nchildren > 1) env3 = Environment(filename=self.filename, overwrite_file=True, log_config=get_log_config()) self.assertFalse(os.path.exists(self.filename)) env2.f_disable_logging() env3.f_disable_logging()
def main(): # Let's be very verbose! logging.basicConfig(level=logging.INFO) # Let's do multiprocessing this time with a lock (which is default) filename = os.path.join('hdf5', 'example_23.hdf5') env = Environment( trajectory='Example_23_BRIAN2', filename=filename, file_title='Example_23_Brian2', comment='Go Brian2!', dynamically_imported_classes=[Brian2MonitorResult, Brian2Parameter]) traj = env.trajectory # 1st a) add the parameters add_params(traj) # 1st b) prepare, we want to explore the different network sizes and different tauw time scales traj.f_explore( cartesian_product({ traj.f_get('N').v_full_name: [50, 60], traj.f_get('tauw').v_full_name: [30 * ms, 40 * ms] })) # 2nd let's run our experiment env.run(run_net) # You can take a look at the results in the hdf5 file if you want! # Finally disable logging and close all log-files env.disable_logging()
def profile_single_storing(profile_storing=False, profile_loading=True):
    logging.basicConfig(level=logging.INFO)

    logfolder = os.path.join(tempfile.gettempdir(), TEMPDIR, 'logs')
    pathfolder = os.path.join(tempfile.gettempdir(), TEMPDIR, 'hdf5')

    res_per_run = 100

    env = Environment(log_folder=logfolder, filename=pathfolder,
                      ncores=2, multiproc=False,
                      use_pool=True,
                      wrap_mode='QUEUE')

    traj = env.v_trajectory

    traj.f_add_parameter('res_per_run', res_per_run)
    traj.f_add_parameter('trial', 0)

    traj.f_explore({'trial': list(range(10))})

    runexp = lambda: env.f_run(add_data)

    if profile_storing:
        cProfile.runctx('runexp()', {'runexp': runexp}, globals(),
                        sort=1, filename='store_stats.profile')
    else:
        runexp()

    print('########################################################################')

    traj = Trajectory(name=traj.v_name, add_time=False,
                      filename=traj.v_storage_service.filename)

    load = lambda: traj.f_load(load_parameters=2, load_results=1)

    if profile_loading:
        cProfile.runctx('load()', {'load': load}, globals(),
                        filename='load_stats.profile', sort=1)
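The profiling helper above wraps pypet calls in lambdas and hands them to `cProfile.runctx`, which executes a statement string under the profiler with explicit global/local namespaces. A self-contained sketch of the same pattern, with `workload` as a hypothetical stand-in for the `runexp` lambda:

```python
import cProfile

def workload():
    # Hypothetical stand-in for the profiled runexp/load lambdas above
    return sum(i * i for i in range(10000))

scope = {'workload': workload}
# runctx compiles and runs the statement string under the profiler,
# using `scope` as both globals and locals; the assignment to `result`
# therefore lands in `scope`. With no filename given, the stats are
# printed to stdout, sorted as requested.
cProfile.runctx('result = workload()', scope, scope, sort=1)
```

Passing `filename='...profile'` instead, as the script does, writes the stats to disk for later inspection with `pstats.Stats`.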
def make_environment(self, idx, filename, **kwargs): #self.filename = make_temp_dir('experiments/tests/HDF5/test.hdf5') logfolder = make_temp_dir(os.path.join('experiments','tests','Log')) trajname = make_trajectory_name(self) + '__' +str(idx) +'_' env = Environment(trajectory=trajname,filename=filename, file_title=trajname, log_stdout=False, large_overview_tables=True, log_config=get_log_config(), **kwargs) self.envs.append(env) self.trajs.append( env.v_trajectory)
def test_full_store(self):
    filename = make_temp_dir('full_store.hdf5')
    with Environment(filename=filename,
                     log_config=get_log_config()) as env:
        traj = env.v_trajectory
        traj.par.x = Parameter('x', 3, 'jj')
        traj.f_explore({'x': [1, 2, 3]})
        env.f_run(add_one_particular_item, True)
        traj = load_trajectory(index=-1, filename=filename)
        self.assertTrue('hi' in traj)
def main(): filename = os.path.join('hdf5', 'Clustered_Network.hdf5') env = Environment(trajectory='Clustered_Network', add_time=False, filename=filename, continuable=False, lazy_debug=False, multiproc=True, ncores=2, use_pool=False, # We cannot use a pool, our network cannot be pickled wrap_mode='QUEUE', overwrite_file=True) #Get the trajectory container traj = env.v_trajectory # We introduce a `meta` parameter that we can use to easily rescale our network scale = 1.0 # To obtain the results from the paper scale this to 1.0 # Be aware that your machine will need a lot of memory then! traj.f_add_parameter('simulation.scale', scale, comment='Meta parameter that can scale default settings. ' 'Rescales number of neurons and connections strenghts, but ' 'not the clustersize.') # We create a Manager and pass all our components to the Manager. # Note the order, CNNeuronGroups are scheduled before CNConnections, # and the Fano Factor computation depends on the CNMonitorAnalysis clustered_network_manager = NetworkManager(network_runner=CNNetworkRunner(), component_list=(CNNeuronGroup(), CNConnections()), analyser_list=(CNMonitorAnalysis(),CNFanoFactorComputer())) # Add original parameters (but scaled according to `scale`) clustered_network_manager.add_parameters(traj) # We need `tolist` here since our parameter is a python float and not a # numpy float. explore_list = np.arange(1.0, 2.6, 0.2).tolist() # Explore different values of `R_ee` traj.f_explore({'R_ee' : explore_list}) # Pre-build network components clustered_network_manager.pre_build(traj) # Run the network simulation traj.f_store() # Let's store the parameters already before the run env.f_run(clustered_network_manager.run_network) # Finally disable logging and close all log-files env.f_disable_logging()
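The comment in the script above notes that `tolist` is needed because the `R_ee` parameter holds a native Python float rather than a numpy float. A small check illustrates why: `np.arange` produces `np.float64` values, and `tolist` converts each of them to a plain `float`.

```python
import numpy as np

# Same expression as in the script: steps of 0.2 from 1.0 up to
# (but not including) 2.6. tolist() turns every np.float64 into a
# native Python float, matching the type of the explored parameter.
explore_list = np.arange(1.0, 2.6, 0.2).tolist()
assert all(type(x) is float for x in explore_list)
```

Exploring with numpy scalars against a plain-float parameter would otherwise raise a type mismatch when pypet validates the exploration values.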
def setUp(self): self.set_mode() logging.basicConfig(level=logging.ERROR) self.logfolder = make_temp_dir( os.path.join('experiments', 'tests', 'Log')) random.seed() self.trajname = make_trajectory_name(self) self.filename = make_temp_dir( os.path.join('experiments', 'tests', 'HDF5', 'test%s.hdf5' % self.trajname)) env = Environment(trajectory=self.trajname, filename=self.filename, file_title=self.trajname, log_stdout=self.log_stdout, log_config=get_log_config(), results_per_run=5, derived_parameters_per_run=5, multiproc=self.multiproc, ncores=self.ncores, wrap_mode=self.mode, use_pool=self.use_pool, fletcher32=self.fletcher32, complevel=self.complevel, complib=self.complib, shuffle=self.shuffle, pandas_append=self.pandas_append, pandas_format=self.pandas_format, encoding=self.encoding) traj = env.v_trajectory traj.v_standard_parameter = Parameter ## Create some parameters create_link_params(traj) ### Add some parameter: explore_params(traj) #remember the trajectory and the environment self.traj = traj self.env = env
def test_expand(self): ###Explore self.explore(self.traj) results = self.env.f_run(multiply) self.are_results_in_order(results) get_root_logger().info(results) traj = self.traj self.assertEqual(len(traj), len(list(compat.listvalues(self.explore_dict)[0]))) self.traj.f_load_skeleton() self.traj.f_load_items(self.traj.f_to_dict().keys(), only_empties=True) self.check_if_z_is_correct(traj) traj_name = self.env.v_trajectory.v_name del self.env self.env = Environment(trajectory=self.traj, log_stdout=False, log_config=get_log_config()) self.traj = self.env.v_trajectory self.traj.f_load(name=traj_name) self.expand(self.traj) results = self.env.f_run(multiply) self.are_results_in_order(results) traj = self.traj self.assertTrue(len(traj) == len(compat.listvalues(self.expand_dict)[0])+ len(compat.listvalues(self.explore_dict)[0])) self.traj.f_load_skeleton() self.traj.f_load_items(self.traj.f_to_dict().keys(), only_empties=True) self.check_if_z_is_correct(traj) newtraj = self.load_trajectory(trajectory_name=self.traj.v_name,as_new=False) self.traj.f_load_skeleton() self.traj.f_load_items(self.traj.f_to_dict().keys(), only_empties=True) self.compare_trajectories(self.traj,newtraj)
def setUp(self): super(TestMergeResultsSort,self).setUp() env2 = Environment(trajectory=self.trajname+'2',filename=self.filename, file_title=self.trajname, log_stdout=False, log_config=get_log_config(), multiproc=self.multiproc, wrap_mode=self.mode, ncores=self.ncores) traj2 = env2.v_trajectory traj2.v_standard_parameter=Parameter traj2.f_add_parameter('x',0) traj2.f_add_parameter('y',0) self.env2=env2 self.traj2=traj2
def make_environment(self, filename, trajname='Test', log=True, **kwargs): #self.filename = '../../experiments/tests/HDF5/test.hdf5' filename = make_temp_dir(filename) logfolder = make_temp_dir(os.path.join('experiments', 'tests', 'Log')) cntfolder = make_temp_dir(os.path.join('experiments', 'tests', 'cnt')) if log: log_config = get_log_config() else: log_config = None env = Environment( trajectory=trajname, # log_levels=logging.INFO, # log_config=None, log_config=log_config, dynamic_imports=[CustomParameter], filename=filename, log_stdout=False, **self.env_kwargs) return env, filename, logfolder, cntfolder
def test_expand(self):
    ### Explore
    self.explore(self.traj)

    print(self.env.f_run(multiply))

    traj = self.traj
    self.assertTrue(len(traj) == len(list(self.explore_dict.values())[0]))
    self.traj.f_update_skeleton()
    self.traj.f_load_items(self.traj.f_to_dict().keys(), only_empties=True)
    self.check_if_z_is_correct(traj)

    traj_name = self.env.v_trajectory.v_name
    del self.env
    self.env = Environment(trajectory=self.traj, filename=self.filename,
                           file_title=self.trajname,
                           log_folder=self.logfolder,
                           log_stdout=False)

    self.traj = self.env.v_trajectory
    self.traj.f_load(name=traj_name)

    self.expand(self.traj)

    self.env.f_run(multiply)

    traj = self.traj
    self.assertTrue(len(traj) == len(list(self.expand_dict.values())[0]) +
                    len(list(self.explore_dict.values())[0]))
    self.traj.f_update_skeleton()
    self.traj.f_load_items(self.traj.f_to_dict().keys(), only_empties=True)
    self.check_if_z_is_correct(traj)

    newtraj = self.load_trajectory(trajectory_name=self.traj.v_name, as_new=False)
    self.traj.f_update_skeleton()
    self.traj.f_load_items(self.traj.f_to_dict().keys(), only_empties=True)
    self.compare_trajectories(self.traj, newtraj)
class ResultSortTest(TrajectoryComparator):

    def set_mode(self):
        self.mode = 'LOCK'
        self.multiproc = False
        self.ncores = 1
        self.use_pool = True

    def setUp(self):
        self.set_mode()
        logging.basicConfig(level=logging.INFO)

        self.filename = make_temp_file('experiments/tests/HDF5/test.hdf5')
        self.logfolder = make_temp_file('experiments/tests/Log')
        self.trajname = make_trajectory_name(self)

        env = Environment(trajectory=self.trajname, filename=self.filename,
                          file_title=self.trajname,
                          log_folder=self.logfolder,
                          log_stdout=False,
                          multiproc=self.multiproc,
                          wrap_mode=self.mode,
                          ncores=self.ncores,
                          use_pool=self.use_pool)

        traj = env.v_trajectory
        traj.v_standard_parameter = Parameter

        traj.f_add_parameter('x', 0)
        traj.f_add_parameter('y', 0)

        self.env = env
        self.traj = traj

    def load_trajectory(self, trajectory_index=None, trajectory_name=None, as_new=False):
        ### Load the trajectory and check if the values are still the same
        newtraj = Trajectory()
        newtraj.v_storage_service = HDF5StorageService(filename=self.filename)
        newtraj.f_load(name=trajectory_name, load_derived_parameters=2,
                       load_results=2, index=trajectory_index, as_new=as_new)
        return newtraj

    def explore(self, traj):
        self.explore_dict = {'x': [0, 1, 2, 3, 4], 'y': [1, 1, 2, 2, 3]}
        traj.f_explore(self.explore_dict)

    def expand(self, traj):
        self.expand_dict = {'x': [10, 11, 12, 13], 'y': [11, 11, 12, 12]}
        traj.f_expand(self.expand_dict)

    def test_if_results_are_sorted_correctly(self):
        ### Explore
        self.explore(self.traj)
        self.env.f_run(multiply)

        traj = self.traj
        self.assertTrue(len(traj) == len(list(self.explore_dict.values())[0]))
        self.traj.f_update_skeleton()
        self.traj.f_load_items(self.traj.f_to_dict().keys(), only_empties=True)
        self.check_if_z_is_correct(traj)

        newtraj = self.load_trajectory(trajectory_name=self.traj.v_name, as_new=False)
        self.traj.f_update_skeleton()
        self.traj.f_load_items(self.traj.f_to_dict().keys(), only_empties=True)
        self.compare_trajectories(self.traj, newtraj)

    def test_f_iter_runs(self):
        ### Explore
        self.explore(self.traj)
        self.env.f_run(multiply)

        traj = self.traj
        self.assertTrue(len(traj) == len(list(self.explore_dict.values())[0]))
        self.traj.f_update_skeleton()
        self.traj.f_load_items(self.traj.f_to_dict().keys(), only_empties=True)
        self.check_if_z_is_correct(traj)

        newtraj = self.load_trajectory(trajectory_name=self.traj.v_name, as_new=False)
        self.traj.f_update_skeleton()
        self.traj.f_load_items(self.traj.f_to_dict().keys(), only_empties=True)

        for idx, run_name in enumerate(self.traj.f_iter_runs()):
            newtraj.v_as_run = run_name
            self.traj.v_idx = idx
            newtraj.v_idx = idx
            self.assertTrue('run_%08d' % (idx + 1) not in traj)
            self.assertTrue(newtraj.z == traj.x * traj.y,
                            ' z != x*y: %s != %s * %s' %
                            (str(newtraj.z), str(traj.x), str(traj.y)))

        self.assertTrue(traj.v_idx == -1)
        self.assertTrue(traj.v_as_run is None)
        self.assertTrue(newtraj.v_idx == idx)

    def test_expand(self):
        ### Explore
        self.explore(self.traj)

        print(self.env.f_run(multiply))

        traj = self.traj
        self.assertTrue(len(traj) == len(list(self.explore_dict.values())[0]))
        self.traj.f_update_skeleton()
        self.traj.f_load_items(self.traj.f_to_dict().keys(), only_empties=True)
        self.check_if_z_is_correct(traj)

        traj_name = self.env.v_trajectory.v_name
        del self.env
        self.env = Environment(trajectory=self.traj, filename=self.filename,
                               file_title=self.trajname,
                               log_folder=self.logfolder,
                               log_stdout=False)

        self.traj = self.env.v_trajectory
        self.traj.f_load(name=traj_name)

        self.expand(self.traj)

        self.env.f_run(multiply)

        traj = self.traj
        self.assertTrue(len(traj) == len(list(self.expand_dict.values())[0]) +
                        len(list(self.explore_dict.values())[0]))
        self.traj.f_update_skeleton()
        self.traj.f_load_items(self.traj.f_to_dict().keys(), only_empties=True)
        self.check_if_z_is_correct(traj)

        newtraj = self.load_trajectory(trajectory_name=self.traj.v_name, as_new=False)
        self.traj.f_update_skeleton()
        self.traj.f_load_items(self.traj.f_to_dict().keys(), only_empties=True)
        self.compare_trajectories(self.traj, newtraj)

    def test_expand_after_reload(self):
        ### Explore
        self.explore(self.traj)
        self.env.f_run(multiply)

        traj = self.traj
        self.assertTrue(len(traj) == len(list(self.explore_dict.values())[0]))
        self.traj.f_update_skeleton()
        self.traj.f_load_items(self.traj.f_to_dict().keys(), only_empties=True)
        self.check_if_z_is_correct(traj)

        self.expand(self.traj)
        self.env.f_run(multiply)

        traj = self.traj
        self.assertTrue(len(traj) == len(list(self.expand_dict.values())[0]) +
                        len(list(self.explore_dict.values())[0]))
        self.traj.f_update_skeleton()
        self.traj.f_load_items(self.traj.f_to_dict().keys(), only_empties=True)
        self.check_if_z_is_correct(traj)

        newtraj = self.load_trajectory(trajectory_name=self.traj.v_name, as_new=False)
        self.traj.f_update_skeleton()
        self.traj.f_load_items(self.traj.f_to_dict().keys(), only_empties=True)
        self.compare_trajectories(self.traj, newtraj)

    def check_if_z_is_correct(self, traj):
        for x in range(len(traj)):
            traj.v_idx = x
            self.assertTrue(traj.z == traj.x * traj.y,
                            ' z != x*y: %s != %s * %s' %
                            (str(traj.z), str(traj.x), str(traj.y)))
        traj.v_idx = -1
def test_merge_with_linked_derived_parameter(self, disable_logging = True): logging.basicConfig(level = logging.ERROR) self.logfolder = make_temp_dir(os.path.join('experiments', 'tests', 'Log')) random.seed() self.trajname1 = 'T1'+ make_trajectory_name(self) self.trajname2 = 'T2'+make_trajectory_name(self) self.filename = make_temp_dir(os.path.join('experiments', 'tests', 'HDF5', 'test%s.hdf5' % self.trajname1)) self.env1 = Environment(trajectory=self.trajname1, filename=self.filename, file_title=self.trajname1, log_stdout=False, log_config=get_log_config()) self.env2 = Environment(trajectory=self.trajname2, filename=self.filename, file_title=self.trajname2, log_stdout=False, log_config=get_log_config()) self.traj1 = self.env1.v_trajectory self.traj2 = self.env2.v_trajectory create_link_params(self.traj1) create_link_params(self.traj2) explore_params(self.traj1) explore_params2(self.traj2) self.traj1.f_add_derived_parameter('test.$.gg', 42) self.traj2.f_add_derived_parameter('test.$.gg', 44) self.traj1.f_add_derived_parameter('test.hh.$', 111) self.traj2.f_add_derived_parameter('test.hh.$', 53) self.env1.f_run(dostuff_and_add_links) self.env2.f_run(dostuff_and_add_links) old_length = len(self.traj1) self.traj1.f_merge(self.traj2, remove_duplicates=True) self.traj1.f_load(load_data=2) for run in self.traj1.f_get_run_names(): self.traj1.v_crun = run idx = self.traj1.v_idx param = self.traj1['test.crun.gg'] if idx < old_length: self.assertTrue(param == 42) else: self.assertTrue(param == 44) param = self.traj1['test.hh.crun'] if idx < old_length: self.assertTrue(param == 111) else: self.assertTrue(param == 53) self.assertTrue(len(self.traj1) > old_length) for irun in range(len(self.traj1.f_get_run_names())): self.assertTrue(self.traj1.res['r_%d' % irun] == self.traj1.paramB) self.assertTrue(self.traj1.res.runs['r_%d' % irun].paraBL == self.traj1.paramB) if disable_logging: self.env1.f_disable_logging() self.env2.f_disable_logging() return old_length
class EnvironmentTest(TrajectoryComparator): def set_mode(self): self.mode = 'LOCK' self.multiproc = False self.ncores = 1 self.use_pool=True self.pandas_format='fixed' self.pandas_append=False self.complib = 'blosc' self.complevel=9 self.shuffle=True self.fletcher32 = False def explore_complex_params(self, traj): matrices_csr = [] for irun in range(3): spsparse_csr = spsp.csr_matrix((111,111)) spsparse_csr[3,2+irun] = 44.5*irun matrices_csr.append(spsparse_csr) matrices_csc = [] for irun in range(3): spsparse_csc = spsp.csc_matrix((111,111)) spsparse_csc[3,2+irun] = 44.5*irun matrices_csc.append(spsparse_csc) matrices_bsr = [] for irun in range(3): spsparse_bsr = spsp.csr_matrix((111,111)) spsparse_bsr[3,2+irun] = 44.5*irun matrices_bsr.append(spsparse_bsr.tobsr()) matrices_dia = [] for irun in range(3): spsparse_dia = spsp.csr_matrix((111,111)) spsparse_dia[3,2+irun] = 44.5*irun matrices_dia.append(spsparse_dia.todia()) self.explore_dict={'string':[np.array(['Uno', 'Dos', 'Tres']), np.array(['Cinco', 'Seis', 'Siette']), np.array(['Ocho', 'Nueve', 'Diez'])], 'int':[1,2,3], 'csr_mat' : matrices_csr, 'csc_mat' : matrices_csc, 'bsr_mat' : matrices_bsr, 'dia_mat' : matrices_dia, 'list' : [['fff'],[444444,444,44,4,4,4],[1,2,3,42]]} with self.assertRaises(pex.NotUniqueNodeError): traj.f_explore(self.explore_dict) self.explore_dict={'Numpy.string':[np.array(['Uno', 'Dos', 'Tres']), np.array(['Cinco', 'Seis', 'Siette']), np.array(['Ocho', 'Nueve', 'Diez'])], 'Normal.int':[1,2,3], 'csr_mat' : matrices_csr, 'csc_mat' : matrices_csc, 'bsr_mat' : matrices_bsr, 'dia_mat' : matrices_dia, 'list' : [['fff'],[444444,444,44,4,4,4],[1,2,3,42]]} traj.f_explore(self.explore_dict) def explore(self, traj): self.explored ={'Normal.trial': [0], 'Numpy.double': [np.array([1.0,2.0,3.0,4.0]), np.array([-1.0,3.0,5.0,7.0])], 'csr_mat' :[spsp.csr_matrix((2222,22)), spsp.csr_matrix((2222,22))]} self.explored['csr_mat'][0][1,2]=44.0 self.explored['csr_mat'][1][2,2]=33 
traj.f_explore(cartesian_product(self.explored)) def explore_large(self, traj): self.explored ={'Normal.trial': [0,1]} traj.f_explore(cartesian_product(self.explored)) def setUp(self): self.set_mode() logging.basicConfig(level = logging.INFO) self.logfolder = make_temp_file('experiments/tests/Log') random.seed() self.trajname = make_trajectory_name(self) self.filename = make_temp_file('experiments/tests/HDF5/test%s.hdf5' % self.trajname) env = Environment(trajectory=self.trajname, filename=self.filename, file_title=self.trajname, log_folder=self.logfolder, log_stdout=False, results_per_run=5, derived_parameters_per_run=5, multiproc=self.multiproc, ncores=self.ncores, wrap_mode=self.mode, use_pool=self.use_pool, fletcher32=self.fletcher32, complevel=self.complevel, complib=self.complib, shuffle=self.shuffle, pandas_append=self.pandas_append, pandas_format=self.pandas_format) traj = env.v_trajectory traj.v_standard_parameter=Parameter ## Create some parameters self.param_dict={} create_param_dict(self.param_dict) ### Add some parameter: add_params(traj,self.param_dict) #remember the trajectory and the environment self.traj = traj self.env = env def make_run_large_data(self): self.env.f_run(add_large_data) def make_run(self): ### Make a test run simple_arg = -13 simple_kwarg= 13.0 self.env.f_run(simple_calculations,simple_arg,simple_kwarg=simple_kwarg) def test_a_large_run(self): self.traj.f_add_parameter('TEST', 'test_run') ###Explore self.explore_large(self.traj) self.make_run_large_data() newtraj = self.load_trajectory(trajectory_name=self.traj.v_name,as_new=False) self.traj.f_update_skeleton() self.traj.f_load_items(self.traj.f_to_dict().keys(), only_empties=True) self.compare_trajectories(self.traj,newtraj) def test_run(self): self.traj.f_add_parameter('TEST', 'test_run') ###Explore self.explore(self.traj) self.make_run() newtraj = self.load_trajectory(trajectory_name=self.traj.v_name,as_new=False) self.traj.f_update_skeleton() 
        self.traj.f_load_items(self.traj.f_to_dict().keys(), only_empties=True)
        self.compare_trajectories(self.traj, newtraj)

    def test_run_complex(self):
        self.traj.f_add_parameter('TEST', 'test_run_complex')
        ###Explore
        self.explore_complex_params(self.traj)
        self.make_run()

        newtraj = self.load_trajectory(trajectory_name=self.traj.v_name, as_new=False)
        self.traj.f_update_skeleton()
        self.traj.f_load_items(self.traj.f_to_dict().keys(), only_empties=True)
        self.compare_trajectories(self.traj, newtraj)

    def load_trajectory(self, trajectory_index=None, trajectory_name=None, as_new=False):
        ### Load the trajectory and check if the values are still the same
        newtraj = Trajectory()
        newtraj.v_storage_service = HDF5StorageService(filename=self.filename)
        newtraj.f_load(name=trajectory_name, load_parameters=2,
                       load_derived_parameters=2, load_results=2,
                       load_other_data=2, index=trajectory_index, as_new=as_new)
        return newtraj

    def test_expand(self):
        ###Explore
        self.traj.f_add_parameter('TEST', 'test_expand')
        self.explore(self.traj)
        self.make_run()

        self.expand()

        print('\n $$$$$$$$$$$$$$$$$ Second Run $$$$$$$$$$$$$$$$$$$$$$$$')
        self.make_run()

        newtraj = self.load_trajectory(trajectory_name=self.traj.v_name, as_new=False)
        self.traj.f_update_skeleton()
        self.traj.f_load_items(self.traj.f_to_dict().keys(), only_empties=True)
        self.compare_trajectories(self.traj, newtraj)

    def test_expand_after_reload(self):
        self.traj.f_add_parameter('TEST', 'test_expand_after_reload')
        ###Explore
        self.explore(self.traj)
        self.make_run()

        traj_name = self.traj.v_name

        self.env = Environment(trajectory=self.traj, filename=self.filename,
                               file_title=self.trajname,
                               log_folder=self.logfolder,
                               log_stdout=False)
        self.traj = self.env.v_trajectory
        self.traj.f_load(name=traj_name)

        self.expand()

        print('\n $$$$$$$$$$$$ Second Run $$$$$$$$$$ \n')
        self.make_run()

        newtraj = self.load_trajectory(trajectory_name=self.traj.v_name, as_new=False)
        self.traj.f_update_skeleton()
        self.traj.f_load_items(self.traj.f_to_dict().keys(),
                               only_empties=True)
        self.compare_trajectories(self.traj, newtraj)

    def expand(self):
        self.expanded = {'Normal.trial': [1],
                         'Numpy.double': [np.array([1.0, 2.0, 3.0, 4.0]),
                                          np.array([-1.0, 3.0, 5.0, 7.0])],
                         'csr_mat': [spsp.csr_matrix((2222, 22)),
                                     spsp.csr_matrix((2222, 22))]}
        self.expanded['csr_mat'][0][1, 2] = 44.0
        self.expanded['csr_mat'][1][2, 2] = 33

        self.traj.f_expand(cartesian_product(self.expanded))

    ################## Overview TESTS #############################

    def test_switch_ON_large_tables(self):
        self.traj.f_add_parameter('TEST', 'test_switch_off_LARGE_tables')
        ###Explore
        self.explore(self.traj)

        self.env.f_set_large_overview(True)
        self.make_run()

        hdf5file = pt.openFile(self.filename)
        overview_group = hdf5file.getNode(where='/' + self.traj.v_name,
                                          name='overview')
        should_be = ['derived_parameters_runs', 'results_runs']
        for name in should_be:
            self.assertTrue(name in overview_group,
                            '%s not in overviews but it should be!' % name)
        hdf5file.close()

        self.traj.f_load(load_parameters=2, load_derived_parameters=2,
                         load_results=2)
        newtraj = self.load_trajectory(trajectory_name=self.traj.v_name)
        self.compare_trajectories(newtraj, self.traj)

    def test_switch_off_all_tables(self):
        ###Explore
        self.traj.f_add_parameter('TEST', 'test_switch_off_ALL_tables')
        self.explore(self.traj)

        self.env.f_switch_off_all_overview()
        self.make_run()

        hdf5file = pt.openFile(self.filename)
        overview_group = hdf5file.getNode(where='/' + self.traj.v_name,
                                          name='overview')
        should_not = HDF5StorageService.NAME_TABLE_MAPPING.keys()
        for name in should_not:
            # Get only the name of the table, not the full name
            name = name.split('.')[-1]
            self.assertTrue(not name in overview_group,
                            '%s in overviews but should not!'
                            % name)
        hdf5file.close()

    def test_store_form_tuple(self):
        self.traj.f_store()

        self.traj.f_add_result('TestResItem', 42, 43)

        with self.assertRaises(ValueError):
            self.traj.f_store_item((pypetconstants.LEAF,
                                    self.traj.TestResItem, (), {}, 5))

        self.traj.f_store_item((pypetconstants.LEAF, self.traj.TestResItem))

        self.traj.results.f_remove_child('TestResItem')

        self.assertTrue('TestResItem' not in self.traj)

        self.traj.results.f_load_child('TestResItem',
                                       load_data=pypetconstants.LOAD_SKELETON)

        self.traj.f_load_item((pypetconstants.LEAF, self.traj.TestResItem, (),
                               {'load_only': 'TestResItem'}))

        self.assertEqual(self.traj.TestResItem, 42)

    def test_store_single_group(self):
        self.traj.f_store()

        self.traj.f_add_parameter_group('new.test.group').v_annotations.f_set(42)

        self.traj.f_store_item('new.group')

        # group is below test not new, so ValueError thrown:
        with self.assertRaises(ValueError):
            self.traj.parameters.new.f_remove_child('group')

        # group is below test not new, so ValueError thrown:
        with self.assertRaises(ValueError):
            self.traj.parameters.new.f_store_child('group')

        # group has children and recursive is false
        with self.assertRaises(TypeError):
            self.traj.parameters.new.f_remove_child('test')

        self.traj.new.f_remove_child('test', recursive=True)

        self.assertTrue('new.group' not in self.traj)

        self.traj.new.f_load_child('test', recursive=True,
                                   load_data=pypetconstants.LOAD_SKELETON)

        self.assertEqual(self.traj.new.group.v_annotations.annotation, 42)

        self.traj.f_delete_item('new.test.group', remove_empty_groups=True)

        with self.assertRaises(pex.DataNotInStorageError):
            self.traj.parameters.f_load_child('new', recursive=True,
                                              load_data=pypetconstants.LOAD_SKELETON)

    def test_switch_on_all_comments(self):
        self.explore(self.traj)
        self.traj.purge_duplicate_comments = 0

        self.make_run()

        hdf5file = pt.openFile(self.filename)
        traj_group = hdf5file.getNode(where='/', name=self.traj.v_name)

        for node in traj_group._f_walkGroups():
            if 'SRVC_LEAF' in node._v_attrs:
                self.assertTrue('SRVC_INIT_COMMENT' in
                                node._v_attrs,
                                'There is no comment in node %s!' % node._v_name)

        hdf5file.close()

    def test_purge_duplicate_comments_but_check_moving_comments_up_the_hierarchy(self):
        self.explore(self.traj)

        with self.assertRaises(RuntimeError):
            self.traj.purge_duplicate_comments = 1

        self.traj.overview.results_runs_summary = 0
        self.make_run()

        self.traj.f_get('purge_duplicate_comments').f_unlock()
        self.traj.purge_duplicate_comments = 1
        self.traj.f_get('results_runs_summary').f_unlock()
        self.traj.overview.results_runs_summary = 1

        # We fake that the trajectory starts with run_00000001
        self.traj._run_information['run_00000000']['completed'] = 1
        self.make_run()

        # Now we make the first run
        self.traj._run_information['run_00000000']['completed'] = 0
        self.make_run()

        hdf5file = pt.openFile(self.filename, mode='a')
        try:
            traj_group = hdf5file.getNode(where='/', name=self.traj.v_name)

            for node in traj_group._f_walkGroups():
                if 'SRVC_LEAF' in node._v_attrs:
                    if ('run_' in node._v_pathname and
                            not pypetconstants.RUN_NAME_DUMMY in node._v_pathname):
                        # comment_run_name = self.get_comment_run_name(
                        #     traj_group, node._v_pathname, node._v_name)
                        comment_run_name = 'run_00000000'
                        if comment_run_name in node._v_pathname:
                            self.assertTrue('SRVC_INIT_COMMENT' in node._v_attrs,
                                            'There is no comment in node %s!'
                                            % node._v_name)
                        else:
                            self.assertTrue(not ('SRVC_INIT_COMMENT' in node._v_attrs),
                                            'There is a comment in node %s!'
                                            % node._v_name)
                    else:
                        self.assertTrue('SRVC_INIT_COMMENT' in node._v_attrs,
                                        'There is no comment in node %s!'
% node._v_name) finally: hdf5file.close() def test_purge_duplicate_comments(self): self.explore(self.traj) with self.assertRaises(RuntimeError): self.traj.purge_duplicate_comments=1 self.traj.overview.results_runs_summary=0 self.make_run() self.traj.f_get('purge_duplicate_comments').f_unlock() self.traj.purge_duplicate_comments=1 self.traj.f_get('results_runs_summary').f_unlock() self.traj.overview.results_runs_summary=1 self.make_run() hdf5file = pt.openFile(self.filename, mode='a') try: traj_group = hdf5file.getNode(where='/', name= self.traj.v_name) for node in traj_group._f_walkGroups(): if 'SRVC_LEAF' in node._v_attrs: if ('run_' in node._v_pathname and not pypetconstants.RUN_NAME_DUMMY in node._v_pathname): comment_run_name = 'run_00000000' if comment_run_name in node._v_pathname: self.assertTrue('SRVC_INIT_COMMENT' in node._v_attrs, 'There is no comment in node %s!' % node._v_name) else: self.assertTrue(not ('SRVC_INIT_COMMENT' in node._v_attrs), 'There is a comment in node %s!' % node._v_name) else: self.assertTrue('SRVC_INIT_COMMENT' in node._v_attrs, 'There is no comment in node %s!' % node._v_name) finally: hdf5file.close()
class LinkMergeTest(TrajectoryComparator): tags = 'integration', 'hdf5', 'environment', 'links', 'merge' def test_merge_with_linked_derived_parameter(self, disable_logging=True): logging.basicConfig(level=logging.ERROR) self.logfolder = make_temp_dir( os.path.join('experiments', 'tests', 'Log')) random.seed() self.trajname1 = 'T1' + make_trajectory_name(self) self.trajname2 = 'T2' + make_trajectory_name(self) self.filename = make_temp_dir( os.path.join('experiments', 'tests', 'HDF5', 'test%s.hdf5' % self.trajname1)) self.env1 = Environment(trajectory=self.trajname1, filename=self.filename, file_title=self.trajname1, log_stdout=False, log_config=get_log_config()) self.env2 = Environment(trajectory=self.trajname2, filename=self.filename, file_title=self.trajname2, log_stdout=False, log_config=get_log_config()) self.traj1 = self.env1.v_trajectory self.traj2 = self.env2.v_trajectory create_link_params(self.traj1) create_link_params(self.traj2) explore_params(self.traj1) explore_params2(self.traj2) self.traj1.f_add_derived_parameter('test.$.gg', 42) self.traj2.f_add_derived_parameter('test.$.gg', 44) self.traj1.f_add_derived_parameter('test.hh.$', 111) self.traj2.f_add_derived_parameter('test.hh.$', 53) self.env1.f_run(dostuff_and_add_links) self.env2.f_run(dostuff_and_add_links) old_length = len(self.traj1) self.traj1.f_merge(self.traj2, remove_duplicates=True) self.traj1.f_load(load_data=2) for run in self.traj1.f_get_run_names(): self.traj1.v_crun = run idx = self.traj1.v_idx param = self.traj1['test.crun.gg'] if idx < old_length: self.assertTrue(param == 42) else: self.assertTrue(param == 44) param = self.traj1['test.hh.crun'] if idx < old_length: self.assertTrue(param == 111) else: self.assertTrue(param == 53) self.assertTrue(len(self.traj1) > old_length) for irun in range(len(self.traj1.f_get_run_names())): self.assertTrue(self.traj1.res['r_%d' % irun] == self.traj1.paramB) self.assertTrue( self.traj1.res.runs['r_%d' % irun].paraBL == self.traj1.paramB) if 
disable_logging: self.env1.f_disable_logging() self.env2.f_disable_logging() return old_length def test_remerging(self): prev_old_length = self.test_merge_with_linked_derived_parameter( disable_logging=False) name = self.traj1 self.bfilename = make_temp_dir( os.path.join('experiments', 'tests', 'HDF5', 'backup_test%s.hdf5' % self.trajname1)) self.traj1.f_load(load_data=2) self.traj1.f_backup(backup_filename=self.bfilename) self.traj3 = load_trajectory(index=-1, filename=self.bfilename, load_all=2) old_length = len(self.traj1) self.traj1.f_merge(self.traj3, backup=False, remove_duplicates=False) self.assertTrue(len(self.traj1) > old_length) self.traj1.f_load(load_data=2) for run in self.traj1.f_get_run_names(): self.traj1.v_crun = run idx = self.traj1.v_idx param = self.traj1['test.crun.gg'] if idx < prev_old_length or old_length <= idx < prev_old_length + old_length: self.assertTrue(param == 42, '%s != 42' % str(param)) else: self.assertTrue(param == 44, '%s != 44' % str(param)) param = self.traj1['test.hh.crun'] if idx < prev_old_length or old_length <= idx < prev_old_length + old_length: self.assertTrue(param == 111, '%s != 111' % str(param)) else: self.assertTrue(param == 53, '%s != 53' % str(param)) self.assertTrue(len(self.traj1) > old_length) for irun in range(len(self.traj1.f_get_run_names())): self.assertTrue( self.traj1.res.runs['r_%d' % irun].paraBL == self.traj1.paramB) self.assertTrue(self.traj1.res['r_%d' % irun] == self.traj1.paramB) self.env1.f_disable_logging() self.env2.f_disable_logging()
class ResultSortTest(TrajectoryComparator): tags = 'integration', 'hdf5', 'environment' def set_mode(self): self.mode = 'LOCK' self.multiproc = False self.ncores = 1 self.use_pool=True self.log_stdout=False self.freeze_input=False self.use_scoop = False self.log_config = True self.port = None self.graceful_exit = True def tearDown(self): self.env.f_disable_logging() super(ResultSortTest, self).tearDown() def setUp(self): self.set_mode() self.filename = make_temp_dir(os.path.join('experiments','tests','HDF5','sort_tests.hdf5')) self.trajname = make_trajectory_name(self) env = Environment(trajectory=self.trajname,filename=self.filename, file_title=self.trajname, log_stdout=self.log_stdout, log_config=get_log_config() if self.log_config else None, multiproc=self.multiproc, wrap_mode=self.mode, ncores=self.ncores, use_pool=self.use_pool, use_scoop=self.use_scoop, port=self.port, freeze_input=self.freeze_input, graceful_exit=self.graceful_exit) traj = env.v_trajectory traj.v_standard_parameter=Parameter traj.f_add_parameter('x',99) traj.f_add_parameter('y',99) self.env=env self.traj=traj def load_trajectory(self,trajectory_index=None,trajectory_name=None,as_new=False, how=2): ### Load The Trajectory and check if the values are still the same newtraj = Trajectory() newtraj.v_storage_service=HDF5StorageService(filename=self.filename) newtraj.f_load(name=trajectory_name, index=trajectory_index, as_new=as_new, load_derived_parameters=how, load_results=how) return newtraj def explore(self,traj): self.explore_dict={'x':[-1,1,2,3,4],'y':[1,1,2,2,3]} traj.f_explore(self.explore_dict) def explore_cartesian(self,traj): self.explore_dict=cartesian_product({'x':[-1,1,2,3,4, 5, 6],'y':[1,1,2,2,3,4,4]}) traj.f_explore(self.explore_dict) def expand(self,traj): self.expand_dict={'x':[10,11,12,13],'y':[11,11,12,12,13]} with self.assertRaises(ValueError): traj.f_expand(self.expand_dict) self.expand_dict={'x':[10,11,12,13],'y':[11,11,12,12]} traj.f_expand(self.expand_dict) def 
test_if_results_are_sorted_correctly_manual_runs(self):
        ###Explore
        self.explore(self.traj)
        self.traj.f_store(only_init=True)

        man_multiply = manual_run()(multiply_with_storing)
        for idx in self.traj.f_iter_runs(yields='idx'):
            self.assertTrue(isinstance(idx, int))
            man_multiply(self.traj)
        traj = self.traj
        traj.f_store()

        self.assertEqual(len(traj), 5)
        self.assertTrue(len(traj) == len(list(self.explore_dict.values())[0]))

        self.traj.f_load_skeleton()
        self.traj.f_load_items(self.traj.f_to_dict().keys(), only_empties=True)
        self.check_if_z_is_correct(traj)

        newtraj = self.load_trajectory(trajectory_name=self.traj.v_name, as_new=False)
        self.traj.f_load_skeleton()
        self.traj.f_load_items(self.traj.f_to_dict().keys(), only_empties=True)
        self.compare_trajectories(self.traj, newtraj)

    def test_if_results_are_sorted_correctly_using_map(self):
        ###Explore
        self.explore(self.traj)

        args1 = [10 * x for x in range(len(self.traj))]
        args2 = [100 * x for x in range(len(self.traj))]
        args3 = list(range(len(self.traj)))

        results = self.env.f_run_map(multiply_args, args1, arg2=args2, arg3=args3)
        self.assertEqual(len(results), len(self.traj))

        traj = self.traj
        self.assertTrue(len(traj) == len(list(self.explore_dict.values())[0]))

        self.traj.f_load_skeleton()
        self.traj.f_load_items(self.traj.f_to_dict().keys(), only_empties=True)
        self.check_if_z_is_correct_map(traj, args1, args2, args3)

        for res in results:
            self.assertEqual(len(res), 2)
            self.assertTrue(isinstance(res[0], int))
            self.assertTrue(isinstance(res[1], int))
            idx = res[0]
            self.assertEqual(self.traj.res.runs[idx].z, res[1])

        newtraj = self.load_trajectory(trajectory_name=self.traj.v_name, as_new=False)
        self.traj.f_load_skeleton()
        self.traj.f_load_items(self.traj.f_to_dict().keys(), only_empties=True)
        self.assertEqual(len(traj), 5)
        self.compare_trajectories(self.traj, newtraj)

    def test_if_results_are_sorted_correctly(self):
        ###Explore
        self.explore(self.traj)

        results = self.env.f_run(multiply)
        self.are_results_in_order(results)
        self.assertEqual(len(results),
len(self.traj)) traj = self.traj self.assertTrue(len(traj) == len(list(self.explore_dict.values())[0])) self.traj.f_load_skeleton() self.traj.f_load_items(self.traj.f_to_dict().keys(), only_empties=True) self.check_if_z_is_correct(traj) for res in results: self.assertEqual(len(res), 2) self.assertTrue(isinstance(res[0], int)) self.assertTrue(isinstance(res[1], int)) idx = res[0] self.assertEqual(self.traj.res.runs[idx].z, res[1]) newtraj = self.load_trajectory(trajectory_name=self.traj.v_name,as_new=False) self.traj.f_load_skeleton() self.traj.f_load_items(self.traj.f_to_dict().keys(), only_empties=True) self.compare_trajectories(self.traj,newtraj) def test_graceful_exit(self): ###Explore self.explore_cartesian(self.traj) results = self.env.f_run(multiply_with_graceful_exit) self.are_results_in_order(results) self.assertFalse(self.traj.f_is_completed()) def test_f_iter_runs(self): ###Explore self.explore(self.traj) results = self.env.f_run(multiply) self.are_results_in_order(results) traj = self.traj self.assertTrue(len(traj) == len(list(self.explore_dict.values())[0])) self.traj.f_load_skeleton() self.traj.f_load_items(self.traj.f_to_dict().keys(), only_empties=True) self.check_if_z_is_correct(traj) newtraj = self.load_trajectory(trajectory_name=self.traj.v_name,as_new=False) self.traj.f_load_skeleton() self.traj.f_load_items(self.traj.f_to_dict().keys(), only_empties=True) for idx, run_name in enumerate(self.traj.f_iter_runs()): newtraj.v_crun=run_name self.traj.v_idx = idx newtraj.v_idx = idx nameset = set((x.v_name for x in traj.f_iter_nodes(predicate=(idx,)))) self.assertTrue('run_%08d' % (idx+1) not in nameset) self.assertTrue('run_%08d' % idx in nameset) self.assertTrue(traj.v_crun == run_name) self.assertTrue(newtraj.crun.z==traj.x*traj.y,' z != x*y: %s != %s * %s' % (str(newtraj.crun.z),str(traj.x),str(traj.y))) for idx, traj in enumerate(self.traj.f_iter_runs(yields='self')): run_name = traj.f_idx_to_run(idx) self.assertTrue(traj is self.traj) 
newtraj.v_crun=run_name self.traj.v_idx = idx newtraj.v_idx = idx nameset = set((x.v_name for x in traj.f_iter_nodes(predicate=(idx,)))) self.assertTrue('run_%08d' % (idx+1) not in nameset) self.assertTrue('run_%08d' % idx in nameset) self.assertTrue(traj.v_crun == run_name) self.assertTrue(newtraj.crun.z==traj.x*traj.y,' z != x*y: %s != %s * %s' % (str(newtraj.crun.z),str(traj.x),str(traj.y))) for idx, traj in enumerate(self.traj.f_iter_runs(yields='copy')): run_name = traj.f_idx_to_run(idx) self.assertTrue(traj is not self.traj) newtraj.v_crun=run_name self.traj.v_idx = idx newtraj.v_idx = idx nameset = set((x.v_name for x in traj.f_iter_nodes(predicate=(idx,)))) self.assertTrue('run_%08d' % (idx+1) not in nameset) self.assertTrue('run_%08d' % idx in nameset) self.assertTrue(traj.v_crun == run_name) self.assertTrue(newtraj.crun.z==traj.x*traj.y,' z != x*y: %s != %s * %s' % (str(newtraj.crun.z),str(traj.x),str(traj.y))) traj = self.traj self.assertTrue(traj.v_idx == -1) self.assertTrue(traj.v_crun is None) self.assertTrue(traj.v_crun_ == pypetconstants.RUN_NAME_DUMMY) self.assertTrue(newtraj.v_idx == idx) def test_f_iter_runs_auto_load(self): ###Explore self.explore(self.traj) results = self.env.f_run(multiply) self.are_results_in_order(results) traj = self.traj self.assertTrue(len(traj) == len(list(self.explore_dict.values())[0])) self.traj.f_load_skeleton() self.traj.f_load_items(self.traj.f_to_dict().keys(), only_empties=True) self.check_if_z_is_correct(traj) newtraj = Trajectory() newtraj.v_storage_service=HDF5StorageService(filename=self.filename) newtraj.f_load(name=self.traj.v_name, index=None, as_new=False, load_data=0) newtraj.v_auto_load = True newtraj.par.f_load_child('y', load_data=1) for idx, run_name in enumerate(self.traj.f_iter_runs()): newtraj.v_crun=run_name self.traj.v_idx = idx newtraj.v_idx = idx nameset = set((x.v_name for x in traj.f_iter_nodes(predicate=(idx,)))) self.assertTrue('run_%08d' % (idx+1) not in nameset) 
self.assertTrue('run_%08d' % idx in nameset) self.assertTrue(traj.v_crun == run_name) self.assertTrue(newtraj.res.runs.crun.z==newtraj.par.x*newtraj.par.y,' z != x*y: %s != %s * %s' % (str(newtraj.crun.z),str(newtraj.x),str(newtraj.y))) traj = self.traj self.assertTrue(traj.v_idx == -1) self.assertTrue(traj.v_crun is None) self.assertTrue(traj.v_crun_ == pypetconstants.RUN_NAME_DUMMY) self.assertTrue(newtraj.v_idx == idx) def test_expand(self): ###Explore self.explore(self.traj) results = self.env.f_run(multiply) self.are_results_in_order(results) get_root_logger().info(results) traj = self.traj self.assertEqual(len(traj), len(list(list(self.explore_dict.values())[0]))) self.traj.f_load_skeleton() self.traj.f_load_items(self.traj.f_to_dict().keys(), only_empties=True) self.check_if_z_is_correct(traj) traj_name = self.env.v_trajectory.v_name del self.env self.env = Environment(trajectory=self.traj, log_stdout=False, log_config=get_log_config()) self.traj = self.env.v_trajectory self.traj.f_load(name=traj_name) self.expand(self.traj) results = self.env.f_run(multiply) self.are_results_in_order(results) traj = self.traj self.assertTrue(len(traj) == len(list(self.expand_dict.values())[0])+ len(list(self.explore_dict.values())[0])) self.traj.f_load_skeleton() self.traj.f_load_items(self.traj.f_to_dict().keys(), only_empties=True) self.check_if_z_is_correct(traj) newtraj = self.load_trajectory(trajectory_name=self.traj.v_name,as_new=False) self.traj.f_load_skeleton() self.traj.f_load_items(self.traj.f_to_dict().keys(), only_empties=True) self.compare_trajectories(self.traj,newtraj) def test_expand_after_reload(self): ###Explore self.explore(self.traj) results = self.env.f_run(multiply) self.are_results_in_order(results) traj = self.traj self.assertTrue(len(traj) == len(list(self.explore_dict.values())[0])) self.traj.f_load_skeleton() self.traj.f_load_items(self.traj.f_to_dict().keys(), only_empties=True) self.check_if_z_is_correct(traj) self.expand(self.traj) 
self.env.f_run(multiply) traj = self.traj self.assertTrue(len(traj) == len(list(self.expand_dict.values())[0])+\ len(list(self.explore_dict.values())[0])) self.traj.f_load_skeleton() self.traj.f_load_items(self.traj.f_to_dict().keys(), only_empties=True) self.check_if_z_is_correct(traj) newtraj = self.load_trajectory(trajectory_name=self.traj.v_name,as_new=False) self.traj.f_load_skeleton() self.traj.f_load_items(self.traj.f_to_dict().keys(), only_empties=True) self.compare_trajectories(self.traj,newtraj) def check_if_z_is_correct_map(self,traj, args1, args2, args3): for x, arg1, arg2, arg3 in zip(range(len(traj)), args1, args2, args3): traj.v_idx=x self.assertTrue(traj.crun.z==traj.x*traj.y+arg1+arg2+arg3,' z != x*y: %s != %s * %s' % (str(traj.crun.z),str(traj.x),str(traj.y))) traj.v_idx=-1 def check_if_z_is_correct(self,traj): traj.v_shortcuts=False for x in range(len(traj)): traj.v_idx=x z = traj.res.runs.crun.z x = traj.par.x y = traj.par.y self.assertTrue(z==x*y,' z != x*y: %s != %s * %s' % (str(z),str(x),str(y))) traj.v_idx=-1 traj.v_shortcuts=True
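Several of the tests above pass their exploration dictionaries through `cartesian_product` before calling `f_explore` or `f_expand`. As a rough stdlib-only sketch of what that helper produces (`cartesian_product_sketch` is a hypothetical stand-in for illustration, not pypet's actual implementation), it expands a dict of value lists into the full grid of combinations, returned as equally long lists:

```python
from itertools import product


def cartesian_product_sketch(params):
    """Expand a dict of value lists into the full grid of combinations,
    returned as a dict of equally long lists (assumption: the dict's
    iteration order defines the axis order)."""
    keys = list(params)
    combos = list(product(*(params[k] for k in keys)))
    return {k: [combo[i] for combo in combos] for i, k in enumerate(keys)}


grid = cartesian_product_sketch({'x': [1, 2], 'y': [6, 7, 8]})
# Every value list now has 2 * 3 == 6 entries, one per combination,
# so f_explore would create one run per entry.
```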
y.val = x**2

smurf = Result('', '', '', '')
z = traj.f_add_result('Nada.Moo', smurf)
z.val = y() + 1
print('Dat wars')

# multip.log_to_stderr().setLevel(logging.INFO)
config['multiproc'] = False
config['ncores'] = 2

env = Environment(trajectory='MyExperiment',
                  filename='../experiments/env.hdf5',
                  dynamically_imported_classes=[BrianParameter])
traj = env.get_trajectory()
assert isinstance(traj, Trajectory)

par = traj.f_add_parameter('x', param_type=BrianParameter, value=3, unit='mV')
par.hui = 'buh'
print(par())
print(par.val)

traj.f_explore(identity, {traj.x.gfn('value'): [1, 2, 3, 4]})

env.f_run(test_run, to_print='test')
from pypet.environment import Environment
from pypet.utils.explore import cartesian_product
from pypet import pypetconstants


def multiply(traj):
    """Sophisticated simulation of multiplication"""
    z = traj.x * traj.y
    traj.f_add_result('z', z, comment='I am the product of two reals!')


# Create an environment that handles running
env = Environment(trajectory='Example08',
                  filename='experiments/example_08/HDF5/example_08.hdf5',
                  file_title='Example08',
                  log_folder='experiments/example_08/LOGS/',
                  comment='Another example!')

# Get the trajectory from the environment
traj = env.v_trajectory

# Add both parameters
traj.f_add_parameter('x', 1, comment='I am the first dimension!')
traj.f_add_parameter('y', 1, comment='I am the second dimension!')

# Explore the parameters with a cartesian product:
traj.f_explore(cartesian_product({'x': [1, 2, 3, 4], 'y': [6, 7, 8]}))

# Run the simulation
env.f_run(multiply)
from pypet.environment import Environment
from pypet.utils.explore import cartesian_product


# Let's reuse the simple multiplication example
def multiply(traj):
    """Sophisticated simulation of multiplication"""
    z = traj.x * traj.y
    traj.f_add_result('z', z=z, comment='I am the product of two reals!')


# Create 2 environments that handle running
env1 = Environment(trajectory='Traj1',
                   filename='experiments/example_03/HDF5/example_03.hdf5',
                   file_title='Example_03',
                   log_folder='experiments/example_03/LOGS/',
                   comment='I will be increased!')

env2 = Environment(trajectory='Traj2',
                   filename='experiments/example_03/HDF5/example_03.hdf5',
                   file_title='Example_03',
                   log_folder='experiments/example_03/LOGS/',
                   comment='I am going to be merged into some other trajectory!')

# Get the trajectories from the environment
traj1 = env1.v_trajectory
traj2 = env2.v_trajectory

# Add both parameters
traj1.f_add_parameter('x', 1.0, comment='I am the first dimension!')
traj1.f_add_parameter('y', 1.0, comment='I am the second dimension!')
traj2.f_add_parameter('x', 1.0, comment='I am the first dimension!')
traj2.f_add_parameter('y', 1.0, comment='I am the second dimension!')
class LinkMergeTest(TrajectoryComparator):

    tags = 'integration', 'hdf5', 'environment', 'links', 'merge'

    def test_merge_with_linked_derived_parameter(self, disable_logging=True):
        logging.basicConfig(level=logging.ERROR)
        self.logfolder = make_temp_dir(os.path.join('experiments', 'tests', 'Log'))
        random.seed()
        self.trajname1 = 'T1' + make_trajectory_name(self)
        self.trajname2 = 'T2' + make_trajectory_name(self)
        self.filename = make_temp_dir(os.path.join('experiments', 'tests', 'HDF5',
                                                   'test%s.hdf5' % self.trajname1))

        self.env1 = Environment(trajectory=self.trajname1, filename=self.filename,
                                file_title=self.trajname1,
                                log_stdout=False,
                                log_config=get_log_config())
        self.env2 = Environment(trajectory=self.trajname2, filename=self.filename,
                                file_title=self.trajname2,
                                log_stdout=False,
                                log_config=get_log_config())

        self.traj1 = self.env1.v_trajectory
        self.traj2 = self.env2.v_trajectory

        create_link_params(self.traj1)
        create_link_params(self.traj2)

        explore_params(self.traj1)
        explore_params2(self.traj2)

        self.traj1.f_add_derived_parameter('test.$.gg', 42)
        self.traj2.f_add_derived_parameter('test.$.gg', 44)

        self.traj1.f_add_derived_parameter('test.hh.$', 111)
        self.traj2.f_add_derived_parameter('test.hh.$', 53)

        self.env1.f_run(dostuff_and_add_links)
        self.env2.f_run(dostuff_and_add_links)

        old_length = len(self.traj1)

        self.traj1.f_merge(self.traj2, remove_duplicates=True)

        self.traj1.f_load(load_data=2)

        for run in self.traj1.f_get_run_names():
            self.traj1.v_crun = run
            idx = self.traj1.v_idx
            param = self.traj1['test.crun.gg']
            if idx < old_length:
                self.assertTrue(param == 42)
            else:
                self.assertTrue(param == 44)

            param = self.traj1['test.hh.crun']
            if idx < old_length:
                self.assertTrue(param == 111)
            else:
                self.assertTrue(param == 53)

        self.assertTrue(len(self.traj1) > old_length)

        for irun in range(len(self.traj1.f_get_run_names())):
            self.assertTrue(self.traj1.res['r_%d' % irun] == self.traj1.paramB)
            self.assertTrue(self.traj1.res.runs['r_%d' % irun].paraBL == self.traj1.paramB)

        if disable_logging:
            self.env1.f_disable_logging()
            self.env2.f_disable_logging()

        return old_length

    def test_remerging(self):
        prev_old_length = self.test_merge_with_linked_derived_parameter(disable_logging=False)

        name = self.traj1

        self.bfilename = make_temp_dir(os.path.join('experiments', 'tests', 'HDF5',
                                                    'backup_test%s.hdf5' % self.trajname1))

        self.traj1.f_load(load_data=2)

        self.traj1.f_backup(backup_filename=self.bfilename)

        self.traj3 = load_trajectory(index=-1, filename=self.bfilename, load_all=2)

        old_length = len(self.traj1)

        self.traj1.f_merge(self.traj3, backup=False, remove_duplicates=False)

        self.assertTrue(len(self.traj1) > old_length)

        self.traj1.f_load(load_data=2)

        for run in self.traj1.f_get_run_names():
            self.traj1.v_crun = run
            idx = self.traj1.v_idx
            param = self.traj1['test.crun.gg']
            if idx < prev_old_length or old_length <= idx < prev_old_length + old_length:
                self.assertTrue(param == 42, '%s != 42' % str(param))
            else:
                self.assertTrue(param == 44, '%s != 44' % str(param))

            param = self.traj1['test.hh.crun']
            if idx < prev_old_length or old_length <= idx < prev_old_length + old_length:
                self.assertTrue(param == 111, '%s != 111' % str(param))
            else:
                self.assertTrue(param == 53, '%s != 53' % str(param))

        self.assertTrue(len(self.traj1) > old_length)

        for irun in range(len(self.traj1.f_get_run_names())):
            self.assertTrue(self.traj1.res.runs['r_%d' % irun].paraBL == self.traj1.paramB)
            self.assertTrue(self.traj1.res['r_%d' % irun] == self.traj1.paramB)

        self.env1.f_disable_logging()
        self.env2.f_disable_logging()
class EnvironmentTest(TrajectoryComparator):

    tags = 'integration', 'hdf5', 'environment'

    def set_mode(self):
        self.mode = 'LOCK'
        self.multiproc = False
        self.gc_interval = None
        self.ncores = 1
        self.use_pool = True
        self.use_scoop = False
        self.freeze_input = False
        self.pandas_format = 'fixed'
        self.pandas_append = False
        self.complib = 'zlib'
        self.complevel = 9
        self.shuffle = True
        self.fletcher32 = False
        self.encoding = 'utf8'
        self.log_stdout = False
        self.wildcard_functions = None
        self.niceness = None
        self.port = None
        self.timeout = None
        self.add_time = True

    def explore_complex_params(self, traj):
        matrices_csr = []
        for irun in range(3):
            spsparse_csr = spsp.lil_matrix((111, 111))
            spsparse_csr[3, 2 + irun] = 44.5 * irun
            matrices_csr.append(spsparse_csr.tocsr())

        matrices_csc = []
        for irun in range(3):
            spsparse_csc = spsp.lil_matrix((111, 111))
            spsparse_csc[3, 2 + irun] = 44.5 * irun
            matrices_csc.append(spsparse_csc.tocsc())

        matrices_bsr = []
        for irun in range(3):
            spsparse_bsr = spsp.lil_matrix((111, 111))
            spsparse_bsr[3, 2 + irun] = 44.5 * irun
            matrices_bsr.append(spsparse_bsr.tocsr().tobsr())

        matrices_dia = []
        for irun in range(3):
            spsparse_dia = spsp.lil_matrix((111, 111))
            spsparse_dia[3, 2 + irun] = 44.5 * irun
            matrices_dia.append(spsparse_dia.tocsc().todia())

        self.explore_dict = {'string': [np.array(['Uno', 'Dos', 'Tres']),
                                        np.array(['Cinco', 'Seis', 'Siette']),
                                        np.array(['Ocho', 'Nueve', 'Diez'])],
                             'int': [1, 2, 3],
                             'csr_mat': matrices_csr,
                             'csc_mat': matrices_csc,
                             'bsr_mat': matrices_bsr,
                             'dia_mat': matrices_dia,
                             'list': [['fff'], [444444, 444, 44, 4, 4, 4], [1, 2, 3, 42]]}

        with self.assertRaises(pex.NotUniqueNodeError):
            traj.f_explore(self.explore_dict)
        traj.f_shrink(force=True)

        par_dict = traj.parameters.f_to_dict()
        for param_name in par_dict:
            param = par_dict[param_name]
            if param.v_name in self.explore_dict:
                param.f_unlock()
                if param.v_explored:
                    param._shrink()

        self.explore_dict = {'Numpy.string': [np.array(['Uno', 'Dos', 'Tres']),
                                              np.array(['Cinco', 'Seis', 'Siette']),
                                              np.array(['Ocho', 'Nueve', 'Diez'])],
                             'Normal.int': [1, 2, 3],
                             'csr_mat': matrices_csr,
                             'csc_mat': matrices_csc,
                             'bsr_mat': matrices_bsr,
                             'dia_mat': matrices_dia,
                             'list': [['fff'], [444444, 444, 44, 4, 4, 4], [1, 2, 3, 42]]}

        traj.f_explore(self.explore_dict)

    def explore(self, traj):
        self.explored = {'Normal.trial': [0],
                         'Numpy.double': [np.array([1.0, 2.0, 3.0, 4.0]),
                                          np.array([-1.0, 3.0, 5.0, 7.0])],
                         'csr_mat': [spsp.lil_matrix((2222, 22)),
                                     spsp.lil_matrix((2222, 22))]}

        self.explored['csr_mat'][0][1, 2] = 44.0
        self.explored['csr_mat'][1][2, 2] = 33

        self.explored['csr_mat'][0] = self.explored['csr_mat'][0].tocsr()
        self.explored['csr_mat'][1] = self.explored['csr_mat'][1].tocsr()

        traj.f_explore(cartesian_product(self.explored))

    def explore_large(self, traj):
        self.explored = {'Normal.trial': [0, 1]}
        traj.f_explore(cartesian_product(self.explored))

    def tearDown(self):
        self.env.f_disable_logging()
        super(EnvironmentTest, self).tearDown()

    def setUp(self):
        self.set_mode()
        self.logfolder = make_temp_dir(os.path.join('experiments', 'tests', 'Log'))

        random.seed()
        self.trajname = make_trajectory_name(self)
        self.filename = make_temp_dir(os.path.join('experiments', 'tests', 'HDF5',
                                                   'test%s.hdf5' % self.trajname))

        env = Environment(trajectory=self.trajname, filename=self.filename,
                          file_title=self.trajname,
                          log_stdout=self.log_stdout,
                          log_config=get_log_config(),
                          results_per_run=5,
                          wildcard_functions=self.wildcard_functions,
                          derived_parameters_per_run=5,
                          multiproc=self.multiproc,
                          ncores=self.ncores,
                          wrap_mode=self.mode,
                          use_pool=self.use_pool,
                          gc_interval=self.gc_interval,
                          freeze_input=self.freeze_input,
                          fletcher32=self.fletcher32,
                          complevel=self.complevel,
                          complib=self.complib,
                          shuffle=self.shuffle,
                          pandas_append=self.pandas_append,
                          pandas_format=self.pandas_format,
                          encoding=self.encoding,
                          niceness=self.niceness,
                          use_scoop=self.use_scoop,
                          port=self.port,
                          add_time=self.add_time,
                          timeout=self.timeout)

        traj = env.v_trajectory

        traj.v_standard_parameter = Parameter

        ## Create some parameters
        self.param_dict = {}
        create_param_dict(self.param_dict)
        ### Add some parameter:
        add_params(traj, self.param_dict)

        # remember the trajectory and the environment
        self.traj = traj
        self.env = env

    @unittest.skipIf(not hasattr(os, 'nice') and psutil is None,
                     'Niceness not supported under non Unix.')
    def test_niceness(self):
        ### Explore
        self.explore(self.traj)

        self.env.f_run(with_niceness)

        self.assertTrue(self.traj.f_is_completed())

    def test_file_overwriting(self):
        self.traj.f_store()

        with ptcompat.open_file(self.filename, mode='r') as file:
            nchildren = len(file.root._v_children)
            self.assertTrue(nchildren > 0)

        env2 = Environment(filename=self.filename,
                           log_config=get_log_config())
        traj2 = env2.v_trajectory
        traj2.f_store()

        self.assertTrue(os.path.exists(self.filename))

        with ptcompat.open_file(self.filename, mode='r') as file:
            nchildren = len(file.root._v_children)
            self.assertTrue(nchildren > 1)

        env3 = Environment(filename=self.filename, overwrite_file=True,
                           log_config=get_log_config())

        self.assertFalse(os.path.exists(self.filename))

        env2.f_disable_logging()
        env3.f_disable_logging()

    def test_time_display_of_loading(self):
        filename = make_temp_dir('sloooow.hdf5')
        env = Environment(trajectory='traj', add_time=True, filename=filename,
                          log_stdout=False,
                          log_config=get_log_config(),
                          dynamic_imports=SlowResult,
                          display_time=0.1)
        traj = env.v_traj
        res = traj.f_add_result(SlowResult, 'iii', 42, 43, comment='llk')
        traj.f_store()
        service_logger = traj.v_storage_service._logger
        root = logging.getLogger('pypet')
        old_level = root.level
        service_logger.setLevel(logging.INFO)
        root.setLevel(logging.INFO)
        traj.f_load(load_data=3)
        service_logger.setLevel(old_level)
        root.setLevel(old_level)

        path = get_log_path(traj)
        mainfilename = os.path.join(path, 'LOG.txt')
        with open(mainfilename, mode='r') as mainf:
            full_text = mainf.read()
            self.assertTrue('nodes/s)' in full_text)

        env.f_disable_logging()

    def make_run_large_data(self):
        self.env.f_run(add_large_data)

    def make_run(self):
        ### Make a test run
        simple_arg = -13
        simple_kwarg = 13.0
        results = self.env.f_run(simple_calculations, simple_arg,
                                 simple_kwarg=simple_kwarg)
        self.are_results_in_order(results)

    def test_a_large_run(self):
        get_root_logger().info('Testing large run')
        self.traj.f_add_parameter('TEST', 'test_run')
        ### Explore
        self.explore_large(self.traj)
        self.make_run_large_data()

        self.assertTrue(self.traj.f_is_completed())

        # Check if printing and repr work
        get_root_logger().info(str(self.env))
        get_root_logger().info(repr(self.env))

        newtraj = Trajectory()
        newtraj.f_load(name=self.traj.v_name, as_new=False, load_data=2,
                       filename=self.filename)

        self.traj.f_load_skeleton()
        self.traj.f_load_items(self.traj.f_to_dict().keys(), only_empties=True)

        self.compare_trajectories(self.traj, newtraj)

        size = os.path.getsize(self.filename)
        size_in_mb = size / 1000000.
        get_root_logger().info('Size is %sMB' % str(size_in_mb))
        self.assertTrue(size_in_mb < 30.0, 'Size is %sMB > 30MB' % str(size_in_mb))

    def test_two_runs(self):
        self.traj.f_add_parameter('TEST', 'test_run')
        self.traj.hdf5.purge_duplicate_comments = False
        ### Explore
        self.explore(self.traj)

        self.make_run()

        newtraj = self.load_trajectory(trajectory_name=self.traj.v_name, as_new=False)
        self.traj.f_load_skeleton()
        self.traj.f_load_items(self.traj.f_to_dict().keys(), only_empties=True)

        self.compare_trajectories(self.traj, newtraj)

        size = os.path.getsize(self.filename)
        size_in_mb = size / 1000000.
        get_root_logger().info('Size is %sMB' % str(size_in_mb))
        self.assertTrue(size_in_mb < 6.0, 'Size is %sMB > 6MB' % str(size_in_mb))

        mp_traj = self.traj

        old_multiproc = self.multiproc
        self.multiproc = False

        ### Make a new single core run
        self.setUp()

        self.traj.f_add_parameter('TEST', 'test_run')
        self.traj.hdf5.purge_duplicate_comments = False
        ### Explore
        self.explore(self.traj)

        self.make_run()

        # newtraj = self.load_trajectory(trajectory_name=self.traj.v_name,as_new=False)
        self.traj.f_load_skeleton()
        self.traj.f_load_items(self.traj.f_to_dict().keys(), only_empties=True)

        self.compare_trajectories(self.traj, newtraj)

        size = os.path.getsize(self.filename)
        size_in_mb = size / 1000000.
        get_root_logger().info('Size is %sMB' % str(size_in_mb))
        self.assertTrue(size_in_mb < 6.0, 'Size is %sMB > 6MB' % str(size_in_mb))

        self.compare_trajectories(mp_traj, self.traj)
        self.multiproc = old_multiproc

    def test_errors(self):
        tmp = make_temp_dir('cont')

        if dill is not None:
            env1 = Environment(continuable=True, continue_folder=tmp,
                               log_config=None, filename=self.filename)
            with self.assertRaises(ValueError):
                env1.f_run_map(multiply_args, [1], [2], [3])

        with self.assertRaises(ValueError):
            Environment(multiproc=True, use_pool=False, freeze_input=True,
                        filename=self.filename, log_config=None)

        env3 = Environment(log_config=None, filename=self.filename)
        with self.assertRaises(ValueError):
            env3.f_run_map(multiply_args)

        with self.assertRaises(ValueError):
            Environment(use_scoop=True, immediate_postproc=True)

        with self.assertRaises(ValueError):
            Environment(use_pool=True, immediate_postproc=True)

        with self.assertRaises(ValueError):
            Environment(continuable=True, wrap_mode='QUEUE', continue_folder=tmp)

        with self.assertRaises(ValueError):
            Environment(use_scoop=True, wrap_mode='QUEUE')

        with self.assertRaises(ValueError):
            Environment(automatic_storing=False, continuable=True, continue_folder=tmp)

        with self.assertRaises(ValueError):
            Environment(port='www.nosi.de', wrap_mode='LOCK')

    def test_run(self):
        self.traj.f_add_parameter('TEST', 'test_run')
        ### Explore
        self.explore(self.traj)

        self.make_run()

        self.assertTrue(self.traj.f_is_completed())

        newtraj = self.load_trajectory(trajectory_name=self.traj.v_name, as_new=False)
        self.traj.f_load_skeleton()
        self.traj.f_load_items(self.traj.f_to_dict().keys(), only_empties=True)

        self.compare_trajectories(self.traj, newtraj)

        size = os.path.getsize(self.filename)
        size_in_mb = size / 1000000.
        get_root_logger().info('Size is %sMB' % str(size_in_mb))
        self.assertTrue(size_in_mb < 6.0, 'Size is %sMB > 6MB' % str(size_in_mb))

    def test_just_one_run(self):
        self.make_run()
        self.assertTrue(self.traj.f_is_completed())

        newtraj = self.load_trajectory(trajectory_name=self.traj.v_name, as_new=False)
        self.traj.f_load_skeleton()
        self.traj.f_load_items(self.traj.f_to_dict().keys(), only_empties=True)

        self.compare_trajectories(self.traj, newtraj)

        self.assertTrue(len(newtraj) == 1)

        size = os.path.getsize(self.filename)
        size_in_mb = size / 1000000.
        get_root_logger().info('Size is %sMB' % str(size_in_mb))
        self.assertTrue(size_in_mb < 2.0, 'Size is %sMB > 2MB' % str(size_in_mb))

        with self.assertRaises(TypeError):
            self.explore(self.traj)

    def test_run_complex(self):
        self.traj.f_add_parameter('TEST', 'test_run_complex')
        ### Explore
        self.explore_complex_params(self.traj)

        self.make_run()

        self.assertTrue(self.traj.f_is_completed())

        newtraj = self.load_trajectory(trajectory_name=self.traj.v_name, as_new=False)
        self.traj.f_update_skeleton()
        self.traj.f_load_items(self.traj.f_to_dict().keys(), only_empties=True)

        self.compare_trajectories(self.traj, newtraj)

    def load_trajectory(self, trajectory_index=None, trajectory_name=None, as_new=False):
        ### Load The Trajectory and check if the values are still the same
        newtraj = Trajectory()
        newtraj.v_storage_service = HDF5StorageService(filename=self.filename)
        newtraj.f_load(name=trajectory_name, index=trajectory_index, as_new=as_new,
                       load_parameters=2, load_derived_parameters=2, load_results=2,
                       load_other_data=2)
        return newtraj

    def test_expand(self):
        ### Explore
        self.traj.f_add_parameter('TEST', 'test_expand')
        self.explore(self.traj)

        self.make_run()

        self.expand()

        get_root_logger().info('\n $$$$$$$$$$$$$$$$$ Second Run $$$$$$$$$$$$$$$$$$$$$$$$')

        self.make_run()

        newtraj = self.load_trajectory(trajectory_name=self.traj.v_name, as_new=False)
        self.traj.f_load_skeleton()
        self.traj.f_load_items(self.traj.f_to_dict().keys(), only_empties=True)

        self.compare_trajectories(self.traj, newtraj)

    def test_expand_after_reload(self):
        self.traj.f_add_parameter('TEST', 'test_expand_after_reload')
        ### Explore
        self.explore(self.traj)

        self.make_run()

        traj_name = self.traj.v_name

        self.env = Environment(trajectory=self.traj,
                               log_stdout=False,
                               log_config=get_log_config())

        self.traj = self.env.v_trajectory

        self.traj.f_load(name=traj_name)

        self.traj.res.f_remove()
        self.traj.dpar.f_remove()

        self.expand()

        get_root_logger().info('\n $$$$$$$$$$$$ Second Run $$$$$$$$$$ \n')

        self.make_run()

        newtraj = self.load_trajectory(trajectory_name=self.traj.v_name, as_new=False)
        self.traj.f_load_skeleton()
        self.traj.f_load_items(self.traj.f_to_dict().keys(), only_empties=True)

        self.compare_trajectories(self.traj, newtraj)

    def expand(self):
        self.expanded = {'Normal.trial': [1],
                         'Numpy.double': [np.array([1.0, 2.0, 3.0, 4.0]),
                                          np.array([-1.0, 3.0, 5.0, 7.0])],
                         'csr_mat': [spsp.lil_matrix((2222, 22)),
                                     spsp.lil_matrix((2222, 22))]}

        self.expanded['csr_mat'][0][1, 2] = 44.0
        self.expanded['csr_mat'][1][2, 2] = 33
        self.expanded['csr_mat'][0] = self.expanded['csr_mat'][0].tocsr()
        self.expanded['csr_mat'][1] = self.expanded['csr_mat'][1].tocsr()

        self.traj.f_expand(cartesian_product(self.expanded))
        self.traj.f_store()

    ################## Overview TESTS #############################

    def test_switch_ON_large_tables(self):
        self.traj.f_add_parameter('TEST', 'test_switch_ON_LARGE_tables')
        ### Explore
        self.explore(self.traj)

        self.env.f_set_large_overview(True)
        self.make_run()

        hdf5file = pt.openFile(self.filename)
        overview_group = hdf5file.getNode(where='/' + self.traj.v_name, name='overview')
        should = ['derived_parameters_overview', 'results_overview']
        for name in should:
            self.assertTrue(name in overview_group,
                            '%s not in overviews but it should!' % name)
        hdf5file.close()

        self.traj.f_load(load_parameters=2, load_derived_parameters=2, load_results=2)
        newtraj = self.load_trajectory(trajectory_name=self.traj.v_name)
        self.compare_trajectories(newtraj, self.traj)

    def test_switch_off_all_tables(self):
        ### Explore
        self.traj.f_add_parameter('TEST', 'test_switch_off_ALL_tables')
        self.explore(self.traj)

        self.env.f_switch_off_all_overview()
        self.make_run()

        hdf5file = pt.openFile(self.filename)
        overview_group = hdf5file.getNode(where='/' + self.traj.v_name, name='overview')
        should_not = HDF5StorageService.NAME_TABLE_MAPPING.keys()
        for name in should_not:
            name = name.split('.')[-1]  # Get only the name of the table, not the full name
            self.assertTrue(name not in overview_group,
                            '%s in overviews but should not!' % name)

        hdf5file.close()

    def test_store_form_tuple(self):
        self.traj.f_store()

        self.traj.f_add_result('TestResItem', 42, 43)

        with self.assertRaises(ValueError):
            self.traj.f_store_item((pypetconstants.LEAF, self.traj.TestResItem, (), {}, 5))

        self.traj.f_store_item((pypetconstants.LEAF, self.traj.TestResItem))

        self.traj.results.f_remove_child('TestResItem')

        self.assertTrue('TestResItem' not in self.traj)

        self.traj.results.f_load_child('TestResItem',
                                       load_data=pypetconstants.LOAD_SKELETON)

        self.traj.f_load_item((pypetconstants.LEAF, self.traj.TestResItem, (),
                               {'load_only': 'TestResItem'}))

        self.assertTrue(self.traj.TestResItem, 42)

    def test_store_single_group(self):
        self.traj.f_store()

        self.traj.f_add_parameter_group('new.test.group').v_annotations.f_set(42)

        self.traj.f_store_item('new.group')

        # group is below test not new, so ValueError thrown:
        with self.assertRaises(ValueError):
            self.traj.parameters.new.f_remove_child('group')

        # group is below test not new, so ValueError thrown:
        with self.assertRaises(ValueError):
            self.traj.parameters.new.f_store_child('group')

        # group has children and recursive is false
        with self.assertRaises(TypeError):
            self.traj.parameters.new.f_remove_child('test')

        self.traj.new.f_remove_child('test', recursive=True)

        self.assertTrue('new.group' not in self.traj)

        self.traj.new.f_load_child('test', recursive=True,
                                   load_data=pypetconstants.LOAD_SKELETON)

        self.assertTrue(self.traj.new.group.v_annotations.annotation, 42)

        self.traj.f_delete_item('new.test.group')

        with self.assertRaises(pex.DataNotInStorageError):
            self.traj.parameters.f_load_child('new.test.group',
                                              load_data=pypetconstants.LOAD_SKELETON)

    def test_switch_on_all_comments(self):
        self.explore(self.traj)
        self.traj.hdf5.purge_duplicate_comments = 0

        self.make_run()

        hdf5file = pt.openFile(self.filename)
        traj_group = hdf5file.getNode(where='/', name=self.traj.v_name)

        for node in traj_group._f_walkGroups():
            if 'SRVC_LEAF' in node._v_attrs:
                self.assertTrue('SRVC_INIT_COMMENT' in node._v_attrs,
                                'There is no comment in node %s!' % node._v_name)

        hdf5file.close()

    def test_purge_duplicate_comments(self):
        self.explore(self.traj)

        with self.assertRaises(RuntimeError):
            self.traj.hdf5.purge_duplicate_comments = 1
            self.traj.overview.results_summary = 0
            self.make_run()

        self.traj.f_get('purge_duplicate_comments').f_unlock()
        self.traj.hdf5.purge_duplicate_comments = 1
        self.traj.f_get('results_summary').f_unlock()
        self.traj.overview.results_summary = 1

        self.make_run()

        hdf5file = pt.openFile(self.filename, mode='a')

        ncomments = {}

        try:
            traj_group = hdf5file.getNode(where='/', name=self.traj.v_name)

            for node in traj_group._f_walkGroups():
                if ('/derived_parameters/' in node._v_pathname or
                        '/results/' in node._v_pathname):
                    if 'SRVC_LEAF' in node._v_attrs:
                        if 'SRVC_INIT_COMMENT' in node._v_attrs:
                            comment = node._v_attrs['SRVC_INIT_COMMENT']
                            if comment not in ncomments:
                                ncomments[comment] = 0
                            ncomments[comment] += 1
        finally:
            hdf5file.close()

        self.assertGreaterEqual(len(ncomments), 1)
        self.assertTrue(all(x == 1 for x in ncomments.values()))

    def test_NOT_purge_duplicate_comments(self):
        self.explore(self.traj)

        self.traj.f_get('purge_duplicate_comments').f_unlock()
        self.traj.hdf5.purge_duplicate_comments = 0
        self.traj.f_get('results_summary').f_unlock()
        self.traj.overview.results_summary = 0

        self.make_run()

        hdf5file = pt.openFile(self.filename, mode='a')

        ncomments = {}

        try:
            traj_group = hdf5file.getNode(where='/', name=self.traj.v_name)

            for node in traj_group._f_walkGroups():
                if ('/derived_parameters/' in node._v_pathname or
                        '/results/' in node._v_pathname):
                    if 'SRVC_LEAF' in node._v_attrs:
                        if 'SRVC_INIT_COMMENT' in node._v_attrs:
                            comment = node._v_attrs['SRVC_INIT_COMMENT']
                            if comment not in ncomments:
                                ncomments[comment] = 0
                            ncomments[comment] += 1
        finally:
            hdf5file.close()

        self.assertGreaterEqual(len(ncomments), 1)
        self.assertTrue(any(x > 1 for x in ncomments.values()))
class ResultSortTest(TrajectoryComparator): tags = 'integration', 'hdf5', 'environment' def set_mode(self): self.mode = 'LOCK' self.multiproc = False self.ncores = 1 self.use_pool=True self.log_stdout=False self.freeze_input=False self.use_scoop = False self.log_config = True self.port = None def tearDown(self): self.env.f_disable_logging() super(ResultSortTest, self).tearDown() def setUp(self): self.set_mode() self.filename = make_temp_dir(os.path.join('experiments','tests','HDF5','sort_tests.hdf5')) self.trajname = make_trajectory_name(self) env = Environment(trajectory=self.trajname,filename=self.filename, file_title=self.trajname, log_stdout=self.log_stdout, log_config=get_log_config() if self.log_config else None, multiproc=self.multiproc, wrap_mode=self.mode, ncores=self.ncores, use_pool=self.use_pool, use_scoop=self.use_scoop, port=self.port, freeze_input=self.freeze_input,) traj = env.v_trajectory traj.v_standard_parameter=Parameter traj.f_add_parameter('x',0) traj.f_add_parameter('y',0) self.env=env self.traj=traj def load_trajectory(self,trajectory_index=None,trajectory_name=None,as_new=False): ### Load The Trajectory and check if the values are still the same newtraj = Trajectory() newtraj.v_storage_service=HDF5StorageService(filename=self.filename) newtraj.f_load(name=trajectory_name, index=trajectory_index, as_new=as_new, load_derived_parameters=2, load_results=2) return newtraj def explore(self,traj): self.explore_dict={'x':[0,1,2,3,4],'y':[1,1,2,2,3]} traj.f_explore(self.explore_dict) def expand(self,traj): self.expand_dict={'x':[10,11,12,13],'y':[11,11,12,12,13]} with self.assertRaises(ValueError): traj.f_expand(self.expand_dict) self.expand_dict={'x':[10,11,12,13],'y':[11,11,12,12]} traj.f_expand(self.expand_dict) def test_if_results_are_sorted_correctly_manual_runs(self): ###Explore self.explore(self.traj) self.traj.f_store(only_init=True) man_multiply = manual_run()(multiply_with_storing) for idx in self.traj.f_iter_runs(yields='idx'): 
self.assertTrue(isinstance(idx, int)) man_multiply(self.traj) traj = self.traj traj.f_store() self.assertTrue(len(traj), 5) self.assertTrue(len(traj) == len(compat.listvalues(self.explore_dict)[0])) self.traj.f_load_skeleton() self.traj.f_load_items(self.traj.f_to_dict().keys(), only_empties=True) self.check_if_z_is_correct(traj) newtraj = self.load_trajectory(trajectory_name=self.traj.v_name,as_new=False) self.traj.f_load_skeleton() self.traj.f_load_items(self.traj.f_to_dict().keys(), only_empties=True) self.compare_trajectories(self.traj,newtraj) def test_if_results_are_sorted_correctly_using_map(self): ###Explore self.explore(self.traj) args1=[10*x for x in range(len(self.traj))] args2=[100*x for x in range(len(self.traj))] args3=list(range(len(self.traj))) results = self.env.f_run_map(multiply_args, args1, arg2=args2, arg3=args3) self.assertEqual(len(results), len(self.traj)) traj = self.traj self.assertTrue(len(traj) == len(compat.listvalues(self.explore_dict)[0])) self.traj.f_load_skeleton() self.traj.f_load_items(self.traj.f_to_dict().keys(), only_empties=True) self.check_if_z_is_correct_map(traj, args1, args2, args3) for res in results: self.assertEqual(len(res), 2) self.assertTrue(isinstance(res[0], int)) self.assertTrue(isinstance(res[1], int)) idx = res[0] self.assertEqual(self.traj.res.runs[idx].z, res[1]) newtraj = self.load_trajectory(trajectory_name=self.traj.v_name,as_new=False) self.traj.f_load_skeleton() self.traj.f_load_items(self.traj.f_to_dict().keys(), only_empties=True) self.assertEqual(len(traj), 5) self.compare_trajectories(self.traj,newtraj) def test_if_results_are_sorted_correctly(self): ###Explore self.explore(self.traj) results = self.env.f_run(multiply) self.are_results_in_order(results) self.assertEqual(len(results), len(self.traj)) traj = self.traj self.assertTrue(len(traj) == len(compat.listvalues(self.explore_dict)[0])) self.traj.f_load_skeleton() self.traj.f_load_items(self.traj.f_to_dict().keys(), only_empties=True) 
self.check_if_z_is_correct(traj) for res in results: self.assertEqual(len(res), 2) self.assertTrue(isinstance(res[0], int)) self.assertTrue(isinstance(res[1], int)) idx = res[0] self.assertEqual(self.traj.res.runs[idx].z, res[1]) newtraj = self.load_trajectory(trajectory_name=self.traj.v_name,as_new=False) self.traj.f_load_skeleton() self.traj.f_load_items(self.traj.f_to_dict().keys(), only_empties=True) self.compare_trajectories(self.traj,newtraj) def test_f_iter_runs(self): ###Explore self.explore(self.traj) results = self.env.f_run(multiply) self.are_results_in_order(results) traj = self.traj self.assertTrue(len(traj) == len(compat.listvalues(self.explore_dict)[0])) self.traj.f_load_skeleton() self.traj.f_load_items(self.traj.f_to_dict().keys(), only_empties=True) self.check_if_z_is_correct(traj) newtraj = self.load_trajectory(trajectory_name=self.traj.v_name,as_new=False) self.traj.f_load_skeleton() self.traj.f_load_items(self.traj.f_to_dict().keys(), only_empties=True) for idx, run_name in enumerate(self.traj.f_iter_runs()): newtraj.v_as_run=run_name self.traj.v_as_run == run_name self.traj.v_idx = idx newtraj.v_idx = idx nameset = set((x.v_name for x in traj.f_iter_nodes(predicate=(idx,)))) self.assertTrue('run_%08d' % (idx+1) not in nameset) self.assertTrue('run_%08d' % idx in nameset) self.assertTrue(traj.v_crun == run_name) self.assertTrue(newtraj.crun.z==traj.x*traj.y,' z != x*y: %s != %s * %s' % (str(newtraj.crun.z),str(traj.x),str(traj.y))) for idx, traj in enumerate(self.traj.f_iter_runs(yields='self')): run_name = traj.f_idx_to_run(idx) self.assertTrue(traj is self.traj) newtraj.v_as_run=run_name self.traj.v_as_run == run_name self.traj.v_idx = idx newtraj.v_idx = idx nameset = set((x.v_name for x in traj.f_iter_nodes(predicate=(idx,)))) self.assertTrue('run_%08d' % (idx+1) not in nameset) self.assertTrue('run_%08d' % idx in nameset) self.assertTrue(traj.v_crun == run_name) self.assertTrue(newtraj.crun.z==traj.x*traj.y,' z != x*y: %s != %s * %s' % 
                            (str(newtraj.crun.z), str(traj.x), str(traj.y)))

        for idx, traj in enumerate(self.traj.f_iter_runs(yields='copy')):
            run_name = traj.f_idx_to_run(idx)
            self.assertTrue(traj is not self.traj)
            newtraj.v_as_run = run_name
            self.traj.v_as_run = run_name
            self.traj.v_idx = idx
            newtraj.v_idx = idx
            nameset = set(x.v_name for x in traj.f_iter_nodes(predicate=(idx,)))
            self.assertTrue('run_%08d' % (idx + 1) not in nameset)
            self.assertTrue('run_%08d' % idx in nameset)
            self.assertTrue(traj.v_crun == run_name)
            self.assertTrue(newtraj.crun.z == traj.x * traj.y,
                            ' z != x*y: %s != %s * %s' %
                            (str(newtraj.crun.z), str(traj.x), str(traj.y)))

        traj = self.traj
        self.assertTrue(traj.v_idx == -1)
        self.assertTrue(traj.v_crun is None)
        self.assertTrue(traj.v_crun_ == pypetconstants.RUN_NAME_DUMMY)
        self.assertTrue(newtraj.v_idx == idx)

    def test_expand(self):
        ### Explore
        self.explore(self.traj)

        results = self.env.f_run(multiply)
        self.are_results_in_order(results)
        get_root_logger().info(results)

        traj = self.traj
        self.assertEqual(len(traj), len(list(compat.listvalues(self.explore_dict)[0])))

        self.traj.f_load_skeleton()
        self.traj.f_load_items(self.traj.f_to_dict().keys(), only_empties=True)

        self.check_if_z_is_correct(traj)

        traj_name = self.env.v_trajectory.v_name
        del self.env
        self.env = Environment(trajectory=self.traj,
                               log_stdout=False,
                               log_config=get_log_config())

        self.traj = self.env.v_trajectory
        self.traj.f_load(name=traj_name)

        self.expand(self.traj)

        results = self.env.f_run(multiply)
        self.are_results_in_order(results)

        traj = self.traj
        self.assertTrue(len(traj) == len(compat.listvalues(self.expand_dict)[0]) +
                        len(compat.listvalues(self.explore_dict)[0]))

        self.traj.f_load_skeleton()
        self.traj.f_load_items(self.traj.f_to_dict().keys(), only_empties=True)

        self.check_if_z_is_correct(traj)

        newtraj = self.load_trajectory(trajectory_name=self.traj.v_name, as_new=False)
        self.traj.f_load_skeleton()
        self.traj.f_load_items(self.traj.f_to_dict().keys(), only_empties=True)

        self.compare_trajectories(self.traj, newtraj)

    def test_expand_after_reload(self):
        ### Explore
        self.explore(self.traj)

        results = self.env.f_run(multiply)
        self.are_results_in_order(results)

        traj = self.traj
        self.assertTrue(len(traj) == len(compat.listvalues(self.explore_dict)[0]))

        self.traj.f_load_skeleton()
        self.traj.f_load_items(self.traj.f_to_dict().keys(), only_empties=True)

        self.check_if_z_is_correct(traj)

        self.expand(self.traj)

        self.env.f_run(multiply)

        traj = self.traj
        self.assertTrue(len(traj) == len(compat.listvalues(self.expand_dict)[0]) +
                        len(compat.listvalues(self.explore_dict)[0]))

        self.traj.f_load_skeleton()
        self.traj.f_load_items(self.traj.f_to_dict().keys(), only_empties=True)

        self.check_if_z_is_correct(traj)

        newtraj = self.load_trajectory(trajectory_name=self.traj.v_name, as_new=False)
        self.traj.f_load_skeleton()
        self.traj.f_load_items(self.traj.f_to_dict().keys(), only_empties=True)

        self.compare_trajectories(self.traj, newtraj)

    def check_if_z_is_correct_map(self, traj, args1, args2, args3):
        for x, arg1, arg2, arg3 in zip(range(len(traj)), args1, args2, args3):
            traj.v_idx = x
            self.assertTrue(traj.crun.z == traj.x * traj.y + arg1 + arg2 + arg3,
                            ' z != x*y+args: %s != %s * %s + %s + %s + %s' %
                            (str(traj.crun.z), str(traj.x), str(traj.y),
                             str(arg1), str(arg2), str(arg3)))
        traj.v_idx = -1

    def check_if_z_is_correct(self, traj):
        traj.v_shortcuts = False
        for x in range(len(traj)):
            traj.v_idx = x
            z = traj.res.runs.crun.z
            x = traj.par.x
            y = traj.par.y
            self.assertTrue(z == x * y, ' z != x*y: %s != %s * %s' %
                            (str(z), str(x), str(y)))
        traj.v_idx = -1
        traj.v_shortcuts = True
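The `explore` and `expand` helpers used by the tests above feed the trajectory a dictionary of equal-length parameter lists, typically built with pypet's `cartesian_product` utility. As a rough, self-contained sketch of what that utility computes (the `cartesian_product_sketch` name and implementation are mine, not pypet's actual code):

```python
from itertools import product

def cartesian_product_sketch(param_dict):
    """Return a dict of equal-length lists covering every combination of the
    input ranges -- a sketch of what pypet's cartesian_product produces."""
    names = list(param_dict)
    combos = list(product(*(param_dict[name] for name in names)))
    return {name: [combo[i] for combo in combos] for i, name in enumerate(names)}

explored = cartesian_product_sketch({'x': [1, 2, 3], 'y': [6, 7, 8]})
print(len(explored['x']))  # 9 combinations
```

Passing such a dict to `traj.f_explore` gives one run per combination; `f_expand` appends further combinations to an already explored trajectory.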
def main():

    env = Environment(trajectory='Example_05_Euler_Integration',
                      filename='experiments/example_05/HDF5/example_05.hdf5',
                      file_title='Example_05_Euler_Integration',
                      log_folder='experiments/example_05/LOGS/',
                      comment='Go for Euler!')

    traj = env.v_trajectory
    trajectory_name = traj.v_name

    # 1st a) phase parameter addition
    add_parameters(traj)

    # 1st b) phase preparation
    # We will add the differential equation (well, its source code only) as a derived parameter
    traj.f_add_derived_parameter(FunctionParameter, 'diff_eq', diff_lorenz,
                                 comment='Source code of our equation!')

    # We want to explore some initial conditions
    traj.f_explore({'initial_conditions': [
        np.array([0.01, 0.01, 0.01]),
        np.array([2.02, 0.02, 0.02]),
        np.array([42.0, 4.2, 0.42])
    ]})
    # 3 different conditions are enough for an illustrative example

    # 2nd phase let's run the experiment
    # We pass `euler_scheme` as our top-level simulation function and
    # the Lorenz equation `diff_lorenz` as an additional argument
    env.f_run(euler_scheme, diff_lorenz)

    # We don't have a 3rd phase of post-processing here

    # 4th phase analysis.
    # I would recommend doing post-processing completely independently from the simulation,
    # but for simplicity let's do it here.

    # Let's assume that we start all over again and load the entire trajectory anew.
    # Yet, there is an error within this approach, do you spot it?
    del traj
    traj = Trajectory(filename='experiments/example_05/HDF5/example_05.hdf5')

    # We will only fully load parameters and derived parameters.
    # Results will be loaded manually later on.
    try:
        # However, this will fail because our trajectory does not know how to
        # build the FunctionParameter. You have seen this coming, right?
        traj.f_load(name=trajectory_name, load_parameters=2,
                    load_derived_parameters=2, load_results=1)
    except ImportError as e:
        print('That didn\'t work, I am sorry: %s' % str(e))

    # Ok, let's try again, but this time adding our parameter to the imports
    traj = Trajectory(filename='experiments/example_05/HDF5/example_05.hdf5',
                      dynamically_imported_classes=FunctionParameter)

    # Now it works:
    traj.f_load(name=trajectory_name, load_parameters=2,
                load_derived_parameters=2, load_results=1)

    # For the fun of it, let's print the source code
    print('\n ---------- The source code of your function ---------- \n %s' % traj.diff_eq)

    # Let's get the exploration array:
    initial_conditions_exploration_array = traj.f_get('initial_conditions').f_get_range()

    # Now let's plot our simulated equations for the different initial conditions:
    # We will iterate through the run names
    for idx, run_name in enumerate(traj.f_get_run_names()):

        # Get the result of run idx from the trajectory
        euler_result = traj.results.f_get(run_name).euler_evolution

        # Now we manually need to load the result. Actually the results are not so large and we
        # could load them all at once. But for demonstration we do as if they were huge:
        traj.f_load_item(euler_result)
        euler_data = euler_result.data

        # Plot fancy 3d plot
        fig = plt.figure(idx)
        ax = fig.add_subplot(projection='3d')
        x = euler_data[:, 0]
        y = euler_data[:, 1]
        z = euler_data[:, 2]
        ax.plot(x, y, z, label='Initial Conditions: %s' %
                str(initial_conditions_exploration_array[idx]))
        plt.legend()
        plt.show()

        # Now we free the data again (because we assume it's huuuuuuge):
        del euler_data
        euler_result.f_empty()
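The numerics behind `euler_scheme` and `diff_lorenz` can be sketched in a few self-contained lines. This is a minimal sketch, not the example's actual code: the `_sketch` names, step size, step count, and the standard Lorenz parameter values are illustrative assumptions.

```python
import numpy as np

def diff_lorenz_sketch(value, sigma=10.0, beta=8.0 / 3.0, rho=28.0):
    # Right-hand side of the Lorenz system for a state vector (x, y, z)
    x, y, z = value
    return np.array([sigma * (y - x),
                     x * (rho - z) - y,
                     x * y - beta * z])

def euler_scheme_sketch(initial_conditions, dt=0.005, steps=2000):
    # Explicit Euler: advance the state by dt times the derivative each step
    result = np.empty((steps, 3))
    result[0] = initial_conditions
    for i in range(1, steps):
        result[i] = result[i - 1] + dt * diff_lorenz_sketch(result[i - 1])
    return result

trajectory = euler_scheme_sketch(np.array([0.01, 0.01, 0.01]))
```

The returned array has one row per time step, matching the `euler_evolution` data the example slices into x, y, and z columns for plotting.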
def main():

    env = Environment(trajectory='Example_06_Euler_Integration',
                      filename='experiments/example_06/HDF5/example_06.hdf5',
                      file_title='Example_06_Euler_Integration',
                      log_folder='experiments/example_06/LOGS/',
                      comment='Go for Euler!')

    traj = env.v_trajectory

    # 1st a) phase parameter addition
    # Remember we have some control flow in the `add_parameters` function: the default parameter
    # set we choose is the `'diff_lorenz'` one, but we want to deviate from that and use
    # `'diff_roessler'`.
    # In order to do that we can preset the corresponding name parameter to change the
    # control flow:
    traj.f_preset_parameter('diff_name', 'diff_roessler')  # If you erase this line,
                                                           # you will get the Lorenz
                                                           # attractor again
    add_parameters(traj)

    # 1st b) phase preparation
    # Let's check which function we want to use
    if traj.diff_name == 'diff_lorenz':
        diff_eq = diff_lorenz
    elif traj.diff_name == 'diff_roessler':
        diff_eq = diff_roessler
    else:
        raise ValueError('I don\'t know what %s is.' % traj.diff_name)

    # And add the source code of the function as a derived parameter.
    traj.f_add_derived_parameter(FunctionParameter, 'diff_eq', diff_eq,
                                 comment='Source code of our equation!')

    # We want to explore some initial conditions
    traj.f_explore({'initial_conditions': [
        np.array([0.01, 0.01, 0.01]),
        np.array([2.02, 0.02, 0.02]),
        np.array([42.0, 4.2, 0.42])
    ]})
    # 3 different conditions are enough for now

    # 2nd phase let's run the experiment
    # We pass `euler_scheme` as our top-level simulation function and
    # the Roessler function as an additional argument
    env.f_run(euler_scheme, diff_eq)

    # Again no post-processing

    # 4th phase analysis.
    # I would recommend doing the analysis completely independently from the simulation,
    # but for simplicity let's do it here.

    # We won't reload the trajectory this time but simply update the skeleton
    traj.f_update_skeleton()

    # For the fun of it, let's print the source code
    print('\n ---------- The source code of your function ---------- \n %s' % traj.diff_eq)

    # Let's get the exploration array:
    initial_conditions_exploration_array = traj.f_get('initial_conditions').f_get_range()

    # Now let's plot our simulated equations for the different initial conditions.
    # We will iterate through the run names
    for idx, run_name in enumerate(traj.f_get_run_names()):

        # Get the result of run idx from the trajectory
        euler_result = traj.results.f_get(run_name).euler_evolution

        # Now we manually need to load the result. Actually the results are not so large and we
        # could load them all at once, but for demonstration we do as if they were huge:
        traj.f_load_item(euler_result)
        euler_data = euler_result.data

        # Plot fancy 3d plot
        fig = plt.figure(idx)
        ax = fig.add_subplot(projection='3d')
        x = euler_data[:, 0]
        y = euler_data[:, 1]
        z = euler_data[:, 2]
        ax.plot(x, y, z, label='Initial Conditions: %s' %
                str(initial_conditions_exploration_array[idx]))
        plt.legend()
        plt.show()

        # Now we free the data again (because we assume it's huuuuuuge):
        del euler_data
        euler_result.f_empty()
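The `diff_name` preset above drives a simple dispatch-by-name pattern: a string parameter selects which derivative function the simulation uses. A minimal sketch of that pattern, assuming the standard Rössler parameter values (the `_sketch` names and the dictionary-based dispatch are my illustration, not the example's actual if/elif code):

```python
import numpy as np

def diff_roessler_sketch(value, a=0.2, b=0.2, c=5.7):
    # Right-hand side of the Roessler system for a state vector (x, y, z);
    # a, b, c are the commonly used default parameters
    x, y, z = value
    return np.array([-y - z,
                     x + a * y,
                     b + z * (x - c)])

DIFF_FUNCTIONS = {'diff_roessler': diff_roessler_sketch}

def select_diff_eq(diff_name):
    # Map the preset name string onto a derivative function, mirroring the
    # if/elif control flow of the example
    try:
        return DIFF_FUNCTIONS[diff_name]
    except KeyError:
        raise ValueError('I don\'t know what %s is.' % diff_name)

diff_eq = select_diff_eq('diff_roessler')
derivative = diff_eq(np.array([1.0, 1.0, 1.0]))
```

Presetting a different `diff_name` before `add_parameters` then swaps the equation without touching the integration code.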