def log_contain(self, keywords):
    # Flush the log to disk before scanning it.
    M4.exec(['sync', './vm.log'])
    with open('./vm.log', 'r') as fd:
        for line in fd:
            for k in keywords:
                if re.search(k, line.strip()):
                    # print(k, line)
                    return True
    return False
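The keyword scan above can be exercised in isolation. Below is a minimal self-contained sketch of the same matching logic (the `log_contains` helper and the sample log lines are hypothetical, introduced only for illustration; the real function reads `./vm.log` via the `M4` wrapper):

```python
import re

def log_contains(lines, keywords):
    # Same scan as log_contain: return True as soon as any regex
    # keyword matches any (stripped) line.
    for line in lines:
        for k in keywords:
            if re.search(k, line.strip()):
                return True
    return False

# Hypothetical VM log lines for illustration.
sample = ['EXT4-fs error (device sda): bad block', 'mounted ok']
print(log_contains(sample, [r'error', r'BUG:']))  # True: 'error' matches
```

Because `re.search` is used (not `re.match`), keywords match anywhere in a line, so crash signatures like `BUG:` or `error` need no anchoring.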
def main(options):
    '''
    There are some implementation optimizations over the algorithm
    presented in the paper:

    Algorithm:
        while not terminate():
            s, w <- prio_pick(Q)
            fill parameters into w and execute w
            if length(w) > MAX:
                drop s, w
            s', w' <- generate_new_from(s)

    First, we set a pre-defined max value as the total number of
    workloads (i.e. test cases), and we group workloads into test
    packages. For each package, we create a new disk to manipulate.

    Second, since the prio_pick procedure favours the longest workload,
    the queue Q is not necessary.

    Third, since checking each workload w incurs high overhead, we check
    workloads in a batch. I.e., if workload w is <call1, call2> and its
    successor w' is <call1, call2, call3>, there is no need to check
    them separately: when we execute w', w has also been executed.

    Fourth, because kmeans incurs overhead, we use kmeans generation and
    random generation alternately.
    '''
    fs = options.fs
    for cnt in range(config.get('NR_TEST_PACKAGE')):
        package = TestPackage()
        if options.test:
            # Create a fresh disk image for this test package.
            result = M4.exec(['./ctrl', 'disk', fs, f'{fs}-{cnt}'])
            M4.print_result(result)
        for _ in range(config.get('NR_TESTCASE_PER_PACKAGE')):
            test_case = generate_test_case(package, options)
            # test_case = generate_concat_test_case(package, options)
            if options.test:
                M4.process_exec(
                    ['./ctrl', 'run-case', fs, f'{fs}-{cnt}', test_case.path_],
                    timeout=20)
                # M4.print_result(result)
                M4.process_exec(['./ctrl', 'kill'])
        if options.test:
            # Remove the per-package disk image after the package finishes.
            result = M4.exec(['rm', '-rf', f'{fs}-{cnt}'])
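The third optimization in the docstring rests on one property: a workload w is covered by a longer workload w' whenever w is a prefix of w', so executing only the longest workload in a chain checks every prefix for free. A minimal sketch of that prefix test (the `is_prefix` helper and the call-name strings are hypothetical, for illustration only):

```python
def is_prefix(w, w_next):
    # w is covered by w_next when executing w_next also executes
    # every call of w, in order, from the start.
    return len(w) <= len(w_next) and w_next[:len(w)] == w

# The docstring's example: w = <call1, call2>, w' = <call1, call2, call3>.
w = ['call1', 'call2']
w_next = ['call1', 'call2', 'call3']
print(is_prefix(w, w_next))  # True: checking w' suffices for both
```

Under this assumption, a batch of workloads generated by repeated extension forms a chain of prefixes, and only the final, longest workload needs to be run.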