def main():
    # Build three small object graphs: a three-dict reference cycle,
    # a self-referencing dict, and an acyclic dict.
    a1 = dict()
    a2 = dict()
    a3 = dict()
    a1['a2'] = a2
    a2['a3'] = a3
    a3['a1'] = a1
    b = dict()
    b['b'] = b          # self-referencing dict
    c = dict()
    c['c'] = 'c'        # no cycle: the value is a plain string

    import gc

    # Everything reachable from b by following back-references.
    print(len(_bfs([b], gc.get_referrers)))

    sccs = tarjan([a1, b, c], gc.get_referrers)
    show_cycles(sccs, joined=True)
    print(sccs)
    del sccs
    gc.collect()

    # NOTE: d, e and f were never defined, so this call could not run;
    # it is left here disabled.
    # sccs = tarjan([d, e, f], gc.get_referrers)
    # show_cycles(sccs, joined=True)

    return

    # Unreachable: a full-heap scan, disabled by the early return above.
    sccs = tarjan(gc.get_objects(), gc.get_referrers)
    print([len(i) for i in sccs])
    import objgraph
    objs = objgraph.at_addrs(sccs[0])
    print(objgraph.typestats(objs))
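`main()` calls `_bfs` and `tarjan` helpers that are not shown in this file. Judging from how they are used (`tarjan(objs, gc.get_referrers)` returns something whose elements can be passed to `objgraph.at_addrs`, i.e. lists of object ids), the following is a minimal sketch of what they could look like; the names `bfs_sketch` and `tarjan_sketch`, and the choice to return only cycle-forming SCCs, are assumptions, not the original implementation.

```python
import itertools


def bfs_sketch(roots, get_neighbors):
    """Collect every object reachable from `roots` via `get_neighbors`."""
    seen = {id(r): r for r in roots}
    queue = list(roots)
    while queue:
        obj = queue.pop(0)
        for nbr in get_neighbors(obj):
            if id(nbr) not in seen:
                seen[id(nbr)] = nbr
                queue.append(nbr)
    return list(seen.values())


def tarjan_sketch(roots, get_neighbors):
    """Tarjan's strongly-connected-components over an object graph.

    Returns only SCCs that actually form cycles (more than one member,
    or a self-reference), as lists of object ids, largest first.
    """
    index, lowlink, on_stack = {}, {}, set()
    stack, sccs = [], []
    counter = itertools.count()

    def strongconnect(obj):
        oid = id(obj)
        index[oid] = lowlink[oid] = next(counter)
        stack.append(obj)
        on_stack.add(oid)
        for nbr in get_neighbors(obj):
            nid = id(nbr)
            if nid not in index:
                strongconnect(nbr)
                lowlink[oid] = min(lowlink[oid], lowlink[nid])
            elif nid in on_stack:
                lowlink[oid] = min(lowlink[oid], index[nid])
        if lowlink[oid] == index[oid]:
            # obj is the root of an SCC: pop its members off the stack.
            comp = []
            while True:
                w = stack.pop()
                on_stack.discard(id(w))
                comp.append(id(w))
                if w is obj:
                    break
            if len(comp) > 1 or any(n is obj for n in get_neighbors(obj)):
                sccs.append(comp)

    for root in roots:
        if id(root) not in index:
            strongconnect(root)
    return sorted(sccs, key=len, reverse=True)
```

With the dicts from `main()` and a neighbor function that follows dict values, this finds the `a1 -> a2 -> a3` cycle and the `b` self-loop while skipping the acyclic `c`. Note that recursion depth limits make the recursive form unsuitable for whole-heap scans.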
def dump_objgraph(objgraph_file):
    import inspect
    import objgraph
    new_ids = objgraph.get_new_ids()
    new_ids_list = new_ids['list']
    new_objs = objgraph.at_addrs(new_ids_list)
    objgraph.show_backrefs(new_objs, highlight=inspect.isclass,
                           refcounts=True, filename=objgraph_file)
    # Call get_new_ids() again so the next dump starts from a fresh baseline.
    new_ids = objgraph.get_new_ids()
def show_cycles(sccs, joined=False):
    import objgraph
    a = sccs
    if joined:
        # Flatten all SCCs into a single group so one graph covers everything.
        a = []
        for scc in sccs:
            a.extend(scc)
        a = [a]
    for scc in a:
        objs = objgraph.at_addrs(scc)
        print(objgraph.typestats(objs))
        objgraph.show_backrefs(objs, max_depth=len(scc) + 5,
                               filter=lambda x: id(x) in scc)
def test_at_addrs(self):
    a = [0, 1, 2]
    new_ids = objgraph.get_new_ids(limit=0)
    new_lists = objgraph.at_addrs(new_ids['list'])
    self.assertIn(a, new_lists)
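Several functions here lean on `objgraph.at_addrs`, which maps `id()` values (addresses) back to live objects. For readers without objgraph installed, a stdlib-only analogue can be sketched by scanning the gc-tracked heap; the name `at_addrs_sketch` is hypothetical, and unlike the real function it can only recover gc-tracked objects (containers), not atomic objects such as ints or strings.

```python
import gc


def at_addrs_sketch(addrs):
    # Map id() values back to live objects by scanning the gc-tracked heap.
    # Only gc-tracked objects (lists, dicts, instances, ...) can be
    # recovered this way; untracked atoms will simply not be found.
    wanted = set(addrs)
    return [obj for obj in gc.get_objects() if id(obj) in wanted]
```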
def show_memory_leaks(
        label=u'memory_leaks',
        max_console_rows=MAX_CONSOLE_ROWS,
        max_graphed_object_types=MAX_GRAPHED_OBJECT_TYPES,
        refs_depth=REFS_DEPTH,
        back_refs_depth=BACK_REFS_DEPTH,
        max_objects_per_type=MAX_OBJECTS_PER_TYPE,
        ignore_thresholds=None,
        graph_directory_path=GRAPH_DIRECTORY_PATH,
        memory_table_buffer=None,
        skip_first_graphs=True):
    """
    Call this function to get data about memory leaks: which objects are
    being leaked, where did they come from, and what do they contain?

    The leaks are measured from the last call to ``get_new_ids()`` (which
    is called within this function). Some data is printed to stdout, and
    more details are available in graphs stored at the paths printed to
    stdout. Subsequent calls with the same label are indicated by an
    increasing index in the filename.

    Args:
        label (unicode): The start of the filename for each graph.
        max_console_rows (int): The max number of object types for which
            to show data on the console.
        max_graphed_object_types (int): The max number of object types for
            which to generate reference graphs.
        refs_depth (int): Maximum depth of forward reference graphs.
        back_refs_depth (int): Maximum depth of backward reference graphs.
        max_objects_per_type (int): Max number of objects per type to use
            as starting points in the reference graphs.
        ignore_thresholds (dict): Object type names for which table rows
            and graphs should not be generated if the new object count is
            below the corresponding number.
        graph_directory_path (unicode): The directory in which graph files
            will be created. It will be created if it doesn't already
            exist.
        memory_table_buffer (StringIO): Storage for the generated table of
            memory statistics. Ideally, create this before starting to
            count newly allocated objects.
        skip_first_graphs (bool): True if the first call to this function
            for a given label should not produce graphs (the default
            behavior). The first call to a given block of code often
            initializes an assortment of objects which aren't really
            leaked memory.
""" if graph_directory_path is None: graph_directory_path = MemoryUsageData.graph_directory_path() if ignore_thresholds is None: ignore_thresholds = IGNORE_THRESHOLDS if memory_table_buffer is None: memory_table_buffer = StringIO() new_ids = get_new_ids(limit=max_console_rows, ignore_thresholds=ignore_thresholds, output=memory_table_buffer) memory_table_text = memory_table_buffer.getvalue() log.info('\n' + memory_table_text) if not os.path.exists(graph_directory_path): os.makedirs(graph_directory_path) label = label.replace(':', '_') index = indices[label].next() + 1 data = {'label': label, 'index': index} path = os.path.join(graph_directory_path, u'{label}_{index}.txt'.format(**data)) with open(path, 'w') as f: f.write(memory_table_text) if index == 1 and skip_first_graphs: return graphed_types = 0 sorted_by_count = sorted(new_ids.items(), key=lambda entry: len(entry[1]), reverse=True) for item in sorted_by_count: type_name = item[0] object_ids = new_ids[type_name] if not object_ids: continue objects = at_addrs(list(object_ids)[:max_objects_per_type]) data['type_name'] = type_name if back_refs_depth > 0: path = os.path.join(graph_directory_path, u'{label}_{index}_{type_name}_backrefs.dot'.format(**data)) show_backrefs(objects, max_depth=back_refs_depth, filename=path) log.info('Generated memory graph at {}'.format(path)) if refs_depth > 0: path = os.path.join(graph_directory_path, u'{label}_{index}_{type_name}_refs.dot'.format(**data)) show_refs(objects, max_depth=refs_depth, filename=path) log.info('Generated memory graph at {}'.format(path)) graphed_types += 1 if graphed_types >= max_graphed_object_types: break