def add_problemtype(self, multiple=False):
    Field = SelectField
    if multiple:
        Field = SelectMultipleField

    field = Field("Problem type",
                  choices=[(a, a) for a in sorted(problemtypes.keys())])

    setattr(self.F, "problemtype", field)
    self.F.argparse_fields["problemtype"] = {}
def run(self, cmdline, db):
    opsys = self.get_opsys_name(cmdline.opsys)
    release = cmdline.opsys_release

    if not cmdline.problemtype:
        self.ptypes = list(problemtypes.keys())
    else:
        self.ptypes = cmdline.problemtype

    out = ""
    if cmdline.components:
        out += self.components(cmdline, db, opsys, release)
        out += "\n\n"

    if cmdline.problems:
        out += self.problems(cmdline, db, opsys, release)
        out += "\n"

    if cmdline.trends:
        out += self.trends(cmdline, db, opsys, release)

    if cmdline.text_overview:
        out += self.text_overview(cmdline, db, opsys, release)

    print(out.rstrip())
def run(self, cmdline, db):
    opsys = self.get_opsys_name(cmdline.opsys)
    release = cmdline.opsys_release

    if len(cmdline.problemtype) < 1:
        self.ptypes = list(problemtypes.keys())
    else:
        self.ptypes = cmdline.problemtype

    out = ""
    if cmdline.components:
        out += self.components(cmdline, db, opsys, release)
        out += "\n\n"

    if cmdline.problems:
        out += self.problems(cmdline, db, opsys, release)
        out += "\n"

    if cmdline.trends:
        out += self.trends(cmdline, db, opsys, release)

    if cmdline.text_overview:
        out += self.text_overview(cmdline, db, opsys, release)

    print(out.rstrip())
def add_problemtype(self, multiple=False) -> None:
    Field = SelectField
    if multiple:
        Field = SelectMultipleField

    field = Field("Problem type",
                  choices=[(a, a) for a in sorted(problemtypes.keys())])

    setattr(self.F, "problemtype", field)
    self.F.argparse_fields["problemtype"] = {}
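Both variants of add_problemtype attach the generated field to the form class self.F rather than to an instance: WTForms binds unbound fields to the form only at instantiation time, so the class must carry the field before the form is constructed. A minimal standalone sketch of that dynamic-field pattern, with a stand-in problemtypes registry (not faf's actual plugin table):

from wtforms import Form, SelectMultipleField

problemtypes = {"core": None, "python": None}  # stand-in for the plugin registry

class FilterForm(Form):
    pass

# Attach the field to the class, not an instance; WTForms turns unbound
# class-level fields into bound fields when the form is instantiated.
setattr(FilterForm, "problemtype",
        SelectMultipleField("Problem type",
                            choices=[(a, a) for a in sorted(problemtypes)]))

form = FilterForm()  # "problemtype" now renders and validates normally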
def run(self, cmdline, db) -> None:
    if not cmdline.problemtype:
        ptypes = list(problemtypes.keys())
    else:
        ptypes = cmdline.problemtype

    for i, ptype in enumerate(ptypes, start=1):
        problemplugin = problemtypes[ptype]
        self.log_info("[{0} / {1}] Processing problem type: {2}".format(
            i, len(ptypes), problemplugin.nice_name))
        self._find_crashfn(db, problemplugin, query_all=cmdline.all)
def run(self, cmdline, db):
    if cmdline.workers < 1:
        self.log_error("At least 1 worker is required")
        return 1

    if not cmdline.problemtype:
        ptypes = list(problemtypes.keys())
    else:
        ptypes = cmdline.problemtype

    for ptype in ptypes:
        if ptype not in problemtypes:
            self.log_warn(
                "Problem type '{0}' is not supported".format(ptype))
            continue

        problemplugin = problemtypes[ptype]
        self.log_info("Processing '{0}' problem type".format(
            problemplugin.nice_name))

        db_ssources = problemplugin.get_ssources_for_retrace(
            db, cmdline.max_fail_count, yield_per=cmdline.batch)
        if not db_ssources:
            continue

        pkgmap = self._get_pkgmap(db, problemplugin, db_ssources)

        # self._get_pkgmap may change paths, flush the changes
        db.session.flush()

        tasks = collections.deque()
        for i, (db_debug_pkg, (db_src_pkg, binpkgmap)) in enumerate(
                pkgmap.items(), start=1):
            self.log_debug("[%d / %d] Creating task for '%s'",
                           i, len(pkgmap), db_debug_pkg.nvra())
            try:
                tasks.append(
                    RetraceTask(db_debug_pkg, db_src_pkg, binpkgmap, db=db))
            except IncompleteTask as ex:
                self.log_debug(str(ex))

        self.log_info("Starting the retracing process")

        retrace = RetracePool(db, tasks, problemplugin, cmdline.workers)
        retrace.run()

        self.log_info("All done")

    return 0
def run(self, cmdline, db):
    if len(cmdline.problemtype) < 1:
        ptypes = problemtypes.keys()
    else:
        ptypes = cmdline.problemtype

    i = 0
    for ptype in ptypes:
        i += 1
        problemplugin = problemtypes[ptype]
        self.log_info("[{0} / {1}] Processing problem type: {2}"
                      .format(i, len(ptypes), problemplugin.nice_name))
        self._find_crashfn(db, problemplugin, query_all=cmdline.all)
def run(self, cmdline, db):
    if len(cmdline.problemtype) < 1:
        ptypes = list(problemtypes.keys())
    else:
        ptypes = cmdline.problemtype

    i = 0
    for ptype in ptypes:
        i += 1
        problemplugin = problemtypes[ptype]
        self.log_info("[{0} / {1}] Processing problem type: {2}"
                      .format(i, len(ptypes), problemplugin.nice_name))
        self._find_crashfn(db, problemplugin, query_all=cmdline.all)
def run(self, cmdline, db):
    if len(cmdline.problemtype) < 1:
        ptypes = problemtypes.keys()
    else:
        ptypes = cmdline.problemtype

    i = 0
    for ptype in ptypes:
        i += 1
        problemplugin = problemtypes[ptype]
        self.log_info("[{0} / {1}] Processing problem type: {2}"
                      .format(i, len(ptypes), problemplugin.nice_name))
        self._create_problems(db, problemplugin)

    self._remove_empty_problems(db)
def run(self, cmdline, db):
    if not cmdline.problemtype:
        ptypes = list(problemtypes.keys())
    else:
        ptypes = cmdline.problemtype

    i = 0
    for ptype in ptypes:
        i += 1
        problemplugin = problemtypes[ptype]
        self.log_info("[{0} / {1}] Processing problem type: {2}".format(
            i, len(ptypes), problemplugin.nice_name))
        self._create_problems(db, problemplugin,
                              cmdline.report_min_count, cmdline.speedup)

    self._remove_empty_problems(db)
def run(self, cmdline, db):
    if not cmdline.problemtype:
        ptypes = list(problemtypes.keys())
    else:
        ptypes = cmdline.problemtype

    self._max_workers = cmdline.max_workers

    ptypes_len = len(ptypes)
    for i, ptype in enumerate(ptypes, start=1):
        problemplugin = problemtypes[ptype]
        self.log_info("[{0} / {1}] Processing problem type: {2}".format(
            i, ptypes_len, problemplugin.nice_name))
        self._create_problems(db, problemplugin,
                              cmdline.report_min_count, cmdline.speedup)

    self._remove_empty_problems(db)
def run(self, cmdline, db):
    if not cmdline.problemtype:
        ptypes = list(problemtypes.keys())
    else:
        ptypes = cmdline.problemtype

    i = 0
    for ptype in ptypes:
        i += 1
        problemplugin = problemtypes[ptype]
        self.log_info("[{0} / {1}] Processing problem type: {2}"
                      .format(i, len(ptypes), problemplugin.nice_name))
        self._create_problems(db, problemplugin,
                              cmdline.report_min_count, cmdline.speedup)

    self._remove_empty_problems(db)
def run(self, cmdline, db):
    if cmdline.problemtype is None or len(cmdline.problemtype) < 1:
        ptypes = problemtypes.keys()
    else:
        ptypes = []
        for ptype in cmdline.problemtype:
            if ptype not in problemtypes:
                self.log_warn("Problem type '{0}' is not supported"
                              .format(ptype))
                continue
            ptypes.append(ptype)

    if len(ptypes) < 1:
        self.log_info("Nothing to do")
        return 0

    i = 0
    for ptype in ptypes:
        i += 1
        problemtype = problemtypes[ptype]
        self.log_info("[{0} / {1}] Processing problem type '{2}'"
                      .format(i, len(ptypes), problemtype.nice_name))

        db_reports = get_reports_by_type(db, ptype)
        j = 0
        for db_report in db_reports:
            j += 1
            self.log_info(" [{0} / {1}] Processing report #{2}"
                          .format(j, len(db_reports), db_report.id))

            hashes = set()
            k = 0
            for db_backtrace in db_report.backtraces:
                k += 1
                self.log_debug(" [{0} / {1}] Processing backtrace #{2}"
                               .format(k, len(db_report.backtraces),
                                       db_backtrace.id))
                try:
                    component = db_report.component.name
                    include_offset = ptype.lower() == "python"
                    bthash = self._hash_backtrace(db_backtrace,
                                                  hashbase=[component],
                                                  offset=include_offset)
                    self.log_debug(" {0}".format(bthash))
                    db_dup = get_report_by_hash(db, bthash)
                    if db_dup is None:
                        self.log_info(" Adding hash '{0}'"
                                      .format(bthash))
                        if bthash not in hashes:
                            db_reporthash = ReportHash()
                            db_reporthash.report = db_report
                            db_reporthash.hash = bthash
                            db.session.add(db_reporthash)
                            hashes.add(bthash)
                    elif db_dup == db_report:
                        self.log_debug(" Hash '{0}' already assigned"
                                       .format(bthash))
                    else:
                        self.log_warn((" Conflict! Skipping hash '{0}'"
                                       " (report #{1})").format(bthash,
                                                                db_dup.id))
                except FafError as ex:
                    self.log_warn(" {0}".format(str(ex)))
                    continue

            db.session.flush()
    # (truncated: tail of the preceding default-value helper)
    return None


associate_select = QuerySelectField(
    "Associate or Group",
    allow_blank=True,
    blank_text="Associate or Group",
    query_factory=lambda: (db.session.query(AssociatePeople)
                           .order_by(asc(AssociatePeople.name))
                           .all()),
    get_pk=lambda a: a.id,
    get_label=lambda a: a.name,
    default=maintainer_default)

type_multiselect = SelectMultipleField(
    "Type",
    choices=[(a, a) for a in sorted(problemtypes.keys())])

solution_checkbox = BooleanField("Solution")


class ProblemFilterForm(Form):
    opsysreleases = releases_multiselect
    component_names = TextField()

    daterange = DaterangeField(
        "Date range",
        default_days=14)

    associate = associate_select
UREPORT_CHECKER = DictChecker({
    "os": DictChecker({
        "name": StringChecker(allowed=systems.keys()),
        "version": StringChecker(pattern=r"^[a-zA-Z0-9_\.\-\+~]+$",
                                 maxlen=column_len(OpSysRelease, "version")),
        "architecture": StringChecker(pattern=r"^[a-zA-Z0-9_]+$",
                                      maxlen=column_len(Arch, "name")),
        # Anything else will be checked by the plugin
    }),

    # The checker for packages depends on operating system
    "packages": ListChecker(Checker(object)),

    "problem": DictChecker({
        "type": StringChecker(allowed=problemtypes.keys()),
        # Anything else will be checked by the plugin
    }),

    "reason": StringChecker(maxlen=column_len(ReportReason, "reason")),

    "reporter": DictChecker({
        "name": StringChecker(pattern=r"^[a-zA-Z0-9 ]+$", maxlen=64),
        "version": StringChecker(pattern=r"^[a-zA-Z0-9_\.\- ]+$", maxlen=64),
    }),

    "ureport_version": IntChecker(minval=0),
})
def run(self, cmdline, db):
    if cmdline.problemtype is None or len(cmdline.problemtype) < 1:
        ptypes = list(problemtypes.keys())
    else:
        ptypes = []
        for ptype in cmdline.problemtype:
            if ptype not in problemtypes:
                self.log_warn(
                    "Problem type '{0}' is not supported".format(ptype))
                continue
            ptypes.append(ptype)

    if not ptypes:
        self.log_info("Nothing to do")
        return 0

    i = 0
    for ptype in ptypes:
        i += 1
        problemtype = problemtypes[ptype]
        self.log_info("[{0} / {1}] Processing problem type '{2}'".format(
            i, len(ptypes), problemtype.nice_name))

        db_reports = get_reports_by_type(db, ptype)
        j = 0
        for db_report in db_reports:
            j += 1
            self.log_info(" [{0} / {1}] Processing report #{2}".format(
                j, len(db_reports), db_report.id))

            hashes = set()
            k = 0
            for db_backtrace in db_report.backtraces:
                k += 1
                self.log_debug(
                    " [{0} / {1}] Processing backtrace #{2}".format(
                        k, len(db_report.backtraces), db_backtrace.id))
                try:
                    component = db_report.component.name
                    include_offset = ptype.lower() == "python"
                    bthash = self._hash_backtrace(db_backtrace,
                                                  hashbase=[component],
                                                  offset=include_offset)
                    self.log_debug(" {0}".format(bthash))
                    db_dup = get_report(db, bthash)
                    if db_dup is None:
                        self.log_info(
                            " Adding hash '{0}'".format(bthash))
                        if bthash not in hashes:
                            db_reporthash = ReportHash()
                            db_reporthash.report = db_report
                            db_reporthash.hash = bthash
                            db.session.add(db_reporthash)
                            hashes.add(bthash)
                    elif db_dup == db_report:
                        self.log_debug(
                            " Hash '{0}' already assigned".format(
                                bthash))
                    else:
                        self.log_warn(
                            (" Conflict! Skipping hash '{0}'"
                             " (report #{1})").format(bthash, db_dup.id))
                except FafError as ex:
                    self.log_warn(" {0}".format(str(ex)))
                    continue

            db.session.flush()

    return 0
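The deduplication above hinges on _hash_backtrace producing identical digests for reports that crash in the same place, seeded with the component name and, for Python reports, frame offsets. A rough sketch of such a helper under assumed inputs (the frames and their "function"/"path"/"offset" keys are hypothetical, not faf's actual schema):

import hashlib

def hash_backtrace(frames, hashbase=(), offset=False):
    # Seed the digest with context such as the component name, then fold
    # in one line per frame; include the offset only for problem types
    # where function names alone would collide too often.
    h = hashlib.sha1()
    for seed in hashbase:
        h.update((seed + "\n").encode("utf-8"))
    for frame in frames:
        line = "{0}@{1}".format(frame["function"], frame["path"])
        if offset:
            line += "+{0}".format(frame["offset"])
        h.update((line + "\n").encode("utf-8"))
    return h.hexdigest()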
def run(self, cmdline, db):
    if len(cmdline.problemtype) < 1:
        ptypes = problemtypes.keys()
    else:
        ptypes = cmdline.problemtype

    for ptype in ptypes:
        if ptype not in problemtypes:
            self.log_warn("Problem type '{0}' is not supported"
                          .format(ptype))
            continue

        problemplugin = problemtypes[ptype]
        self.log_info("Processing '{0}' problem type"
                      .format(problemplugin.nice_name))

        db_ssources = problemplugin.get_ssources_for_retrace(
            db, yield_per=cmdline.batch)
        if len(db_ssources) < 1:
            continue

        i = 0
        batch = []
        db_batch = []
        for db_ssource in db_ssources:
            i += 1
            self.log_info("Processing symbol {0}/{1}"
                          .format(i, len(db_ssources)))

            req_data = {
                "build_id": db_ssource.build_id,
                "path": db_ssource.path,
                "offset": db_ssource.offset,
                "type": ptype,
            }
            batch.append(req_data)
            db_batch.append(db_ssource)

            if len(batch) >= cmdline.batch or i == len(db_ssources):
                self.log_info("Sending request...")
                r = requests.post(
                    self.remote_url,
                    data=json.dumps(batch),
                    params={"create_symbol_auth": self.auth_key},
                    headers={"content-type": "application/json"}
                )

                if r.status_code == requests.codes.ok:
                    res_data = r.json()
                    if len(res_data) != len(batch):
                        self.log_warn("Response length mismatch.")
                        batch = []
                        db_batch = []
                        continue

                    new_db_symbols = {}
                    for j in xrange(len(res_data)):
                        data = res_data[j]
                        if data.get("error", False):
                            self.log_info(data["error"])
                            continue

                        db_ssource = db_batch[j]
                        ssource = data["SymbolSource"]
                        symbol = data["Symbol"]

                        db_ssource.build_id = ssource["build_id"]
                        db_ssource.path = ssource["path"]
                        db_ssource.offset = ssource["offset"]
                        db_ssource.func_offset = ssource["func_offset"]
                        db_ssource.hash = ssource["hash"]
                        db_ssource.source_path = ssource["source_path"]
                        db_ssource.line_number = ssource["line_number"]

                        db_symbol = get_symbol_by_name_path(
                            db, symbol["name"], symbol["normalized_path"])
                        if db_symbol is None:
                            db_symbol = new_db_symbols.get(
                                (symbol["name"], symbol["normalized_path"]),
                                None)

                        if db_symbol is None:
                            db_symbol = Symbol()
                            db.session.add(db_symbol)
                            new_db_symbols[(symbol["name"],
                                            symbol["normalized_path"])] = \
                                db_symbol

                        db_symbol.name = symbol["name"]
                        db_symbol.nice_name = symbol["nice_name"]
                        db_symbol.normalized_path = symbol["normalized_path"]

                        db_ssource.symbol = db_symbol

                        self.log_info("Symbol saved.")

                    db.session.flush()

                batch = []
                db_batch = []
UREPORT_CHECKER = DictChecker({
    "os": DictChecker({
        "name": StringChecker(allowed=list(systems.keys())),
        "version": StringChecker(pattern=r"^[a-zA-Z0-9_\.\-\+~]+$",
                                 maxlen=column_len(OpSysRelease, "version")),
        "architecture": StringChecker(pattern=r"^[a-zA-Z0-9_]+$",
                                      maxlen=column_len(Arch, "name")),
        # Anything else will be checked by the plugin
    }),

    # The checker for packages depends on operating system
    "packages": ListChecker(Checker(object)),

    "problem": DictChecker({
        "type": StringChecker(allowed=list(problemtypes.keys())),
        # Anything else will be checked by the plugin
    }),

    "reason": StringChecker(maxlen=column_len(ReportReason, "reason")),

    "reporter": DictChecker({
        "name": StringChecker(pattern=r"^[a-zA-Z0-9 ]+$", maxlen=64),
        "version": StringChecker(pattern=r"^[a-zA-Z0-9_\.\- ]+$", maxlen=64),
    }),

    "ureport_version": IntChecker(minval=0),
})
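UREPORT_CHECKER only validates the outer shape of an incoming uReport; the OS and problem plugins check the rest. Assuming the checkers accept plain dicts, a skeleton report matching this structure would look roughly like the following (all values illustrative, not taken from a real report):

ureport = {
    "os": {
        "name": "fedora",        # must be one of the registered systems
        "version": "40",
        "architecture": "x86_64",
    },
    "packages": [],              # contents validated later by the OS plugin
    "problem": {
        "type": "core",          # must be a registered problem type
    },
    "reason": "Process /usr/bin/example was killed by signal 11",
    "reporter": {
        "name": "ABRT",
        "version": "2.17.6",
    },
    "ureport_version": 2,
}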
def run(self, cmdline, db):
    if cmdline.workers < 1:
        self.log_error("At least 1 worker is required")
        return 1

    if len(cmdline.problemtype) < 1:
        ptypes = problemtypes.keys()
    else:
        ptypes = cmdline.problemtype

    for ptype in ptypes:
        if ptype not in problemtypes:
            self.log_warn("Problem type '{0}' is not supported"
                          .format(ptype))
            continue

        problemplugin = problemtypes[ptype]
        self.log_info("Processing '{0}' problem type"
                      .format(problemplugin.nice_name))

        db_ssources = problemplugin.get_ssources_for_retrace(db)
        if len(db_ssources) < 1:
            continue

        pkgmap = self._get_pkgmap(db, problemplugin, db_ssources)

        # self._get_pkgmap may change paths, flush the changes
        db.session.flush()

        tasks = []
        i = 0
        for db_debug_pkg, (db_src_pkg, binpkgmap) in pkgmap.items():
            i += 1
            self.log_debug("[{0} / {1}] Creating task for '{2}'"
                           .format(i, len(pkgmap), db_debug_pkg.nvra()))
            try:
                tasks.append(RetraceTask(db_debug_pkg, db_src_pkg,
                                         binpkgmap, db=db))
            except IncompleteTask as ex:
                self.log_debug(str(ex))

        inqueue = collections.deque(tasks)
        outqueue = Queue.Queue(cmdline.workers)
        total = len(tasks)

        workers = [RetraceWorker(i, inqueue, outqueue)
                   for i in xrange(cmdline.workers)]
        for worker in workers:
            self.log_debug("Spawning {0}".format(worker.name))
            worker.start()

        i = 0
        try:
            while True:
                wait = any(w.is_alive() for w in workers)
                try:
                    task = outqueue.get(wait, 1)
                except Queue.Empty:
                    if any(w.is_alive() for w in workers):
                        continue

                    self.log_info("All done")
                    break

                i += 1
                self.log_info("[{0} / {1}] Retracing {2}"
                              .format(i, total, task.debuginfo.nvra))
                problemplugin.retrace(db, task)
                db.session.flush()
                outqueue.task_done()
        except:
            for worker in workers:
                worker.stop = True

            raise
def run(self, cmdline, db):
    if cmdline.workers < 1:
        self.log_error("At least 1 worker is required")
        return 1

    if not cmdline.problemtype:
        ptypes = list(problemtypes.keys())
    else:
        ptypes = cmdline.problemtype

    for ptype in ptypes:
        if ptype not in problemtypes:
            self.log_warn(
                "Problem type '{0}' is not supported".format(ptype))
            continue

        problemplugin = problemtypes[ptype]
        self.log_info("Processing '{0}' problem type".format(
            problemplugin.nice_name))

        db_ssources = problemplugin.get_ssources_for_retrace(
            db, cmdline.max_fail_count, yield_per=cmdline.batch)
        if not db_ssources:
            continue

        pkgmap = self._get_pkgmap(db, problemplugin, db_ssources)

        # self._get_pkgmap may change paths, flush the changes
        db.session.flush()

        tasks = []
        i = 0
        for db_debug_pkg, (db_src_pkg, binpkgmap) in pkgmap.items():
            i += 1
            self.log_debug("[{0} / {1}] Creating task for '{2}'".format(
                i, len(pkgmap), db_debug_pkg.nvra()))
            try:
                tasks.append(
                    RetraceTask(db_debug_pkg, db_src_pkg, binpkgmap, db=db))
            except IncompleteTask as ex:
                self.log_debug(str(ex))

        inqueue = collections.deque(tasks)
        outqueue = queue.Queue(cmdline.workers)
        total = len(tasks)

        workers = [
            RetraceWorker(i, inqueue, outqueue)
            for i in range(cmdline.workers)
        ]
        for worker in workers:
            self.log_debug("Spawning {0}".format(worker.name))
            worker.start()

        i = 0
        try:
            while True:
                wait = any(w.is_alive() for w in workers)
                try:
                    task = outqueue.get(wait, 1)
                except queue.Empty:
                    if any(w.is_alive() for w in workers):
                        continue

                    self.log_info("All done")
                    break

                i += 1
                self.log_info("[{0} / {1}] Retracing {2}".format(
                    i, total, task.debuginfo.nvra))
                problemplugin.retrace(db, task)
                db.session.flush()
                outqueue.task_done()
        except:
            for worker in workers:
                worker.stop = True

            raise
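In both retrace variants the bounded outqueue is what throttles the pipeline: workers block on put() once the consumer, which retraces and flushes to the database, falls behind by more than cmdline.workers items. The same deque-in / bounded-Queue-out shape, reduced to a runnable sketch with stand-in task values (Worker is a simplified stand-in for RetraceWorker):

import collections
import queue
import threading

class Worker(threading.Thread):
    def __init__(self, inqueue, outqueue):
        super().__init__(daemon=True)
        self.inqueue = inqueue
        self.outqueue = outqueue
        self.stop = False

    def run(self):
        while not self.stop:
            try:
                task = self.inqueue.popleft()  # deque.popleft() is thread-safe
            except IndexError:
                return  # no work left
            self.outqueue.put(task)  # blocks while the consumer is behind

inqueue = collections.deque(range(10))
outqueue = queue.Queue(maxsize=4)  # back-pressure: at most 4 pending results
workers = [Worker(inqueue, outqueue) for _ in range(3)]
for w in workers:
    w.start()

done = 0
while True:
    try:
        task = outqueue.get(timeout=1)
    except queue.Empty:
        if any(w.is_alive() for w in workers):
            continue
        break  # workers finished and the queue is drained
    done += 1  # a real consumer would retrace and flush to the DB here
    outqueue.task_done()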
associate_select = QuerySelectField(
    "Associate or Group",
    allow_blank=True,
    blank_text="Associate or Group",
    query_factory=lambda: (db.session.query(AssociatePeople).order_by(
        asc(AssociatePeople.name)).all()),
    get_pk=lambda a: a.id,
    get_label=lambda a: a.name,
    default=maintainer_default)

type_multiselect = SelectMultipleField("Type", choices=[
    (a, a) for a in sorted(problemtypes.keys())
])

solution_checkbox = BooleanField("Solution")


class ProblemFilterForm(Form):
    opsysreleases = releases_multiselect
    component_names = TextField()
    daterange = DaterangeField("Date range", default_days=14)
    associate = associate_select
    arch = arch_multiselect
def run(self, cmdline, db):
    if len(cmdline.problemtype) < 1:
        ptypes = problemtypes.keys()
    else:
        ptypes = cmdline.problemtype

    for ptype in ptypes:
        if ptype not in problemtypes:
            self.log_warn(
                "Problem type '{0}' is not supported".format(ptype))
            continue

        problemplugin = problemtypes[ptype]
        self.log_info("Processing '{0}' problem type".format(
            problemplugin.nice_name))

        db_ssources = problemplugin.get_ssources_for_retrace(
            db, yield_per=cmdline.batch)
        if len(db_ssources) < 1:
            continue

        i = 0
        batch = []
        db_batch = []
        for db_ssource in db_ssources:
            i += 1
            self.log_info("Processing symbol {0}/{1}".format(
                i, len(db_ssources)))

            req_data = {
                "build_id": db_ssource.build_id,
                "path": db_ssource.path,
                "offset": db_ssource.offset,
                "type": ptype,
            }
            batch.append(req_data)
            db_batch.append(db_ssource)

            if len(batch) >= cmdline.batch or i == len(db_ssources):
                self.log_info("Sending request...")
                r = requests.post(
                    self.remote_url,
                    data=json.dumps(batch),
                    params={"create_symbol_auth": self.auth_key},
                    headers={"content-type": "application/json"})

                if r.status_code == requests.codes.ok:
                    res_data = r.json()
                    if len(res_data) != len(batch):
                        self.log_warn("Response length mismatch.")
                        batch = []
                        db_batch = []
                        continue

                    new_db_symbols = {}
                    for j in xrange(len(res_data)):
                        data = res_data[j]
                        if data.get("error", False):
                            self.log_info(data["error"])
                            continue

                        db_ssource = db_batch[j]
                        ssource = data["SymbolSource"]
                        symbol = data["Symbol"]

                        db_ssource.build_id = ssource["build_id"]
                        db_ssource.path = ssource["path"]
                        db_ssource.offset = ssource["offset"]
                        db_ssource.func_offset = ssource["func_offset"]
                        db_ssource.hash = ssource["hash"]
                        db_ssource.source_path = ssource["source_path"]
                        db_ssource.line_number = ssource["line_number"]

                        db_symbol = get_symbol_by_name_path(
                            db, symbol["name"], symbol["normalized_path"])
                        if db_symbol is None:
                            db_symbol = new_db_symbols.get(
                                (symbol["name"], symbol["normalized_path"]),
                                None)
                        if db_symbol is None:
                            db_symbol = Symbol()
                            db.session.add(db_symbol)
                            new_db_symbols[(
                                symbol["name"],
                                symbol["normalized_path"])] = db_symbol

                        db_symbol.name = symbol["name"]
                        db_symbol.nice_name = symbol["nice_name"]
                        db_symbol.normalized_path = symbol[
                            "normalized_path"]

                        db_ssource.symbol = db_symbol

                        self.log_info("Symbol saved.")

                    db.session.flush()

                batch = []
                db_batch = []
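Both versions of this command accumulate request payloads and flush whenever the batch fills up or the input is exhausted, then pair each response element with the local object it updates. That send condition generalizes; a minimal sketch of the pattern with a hypothetical url and sequence of payloads (endpoint semantics assumed, not faf's actual API):

import json
import requests

def post_in_batches(items, url, batch_size):
    # Yield (payload, response_element) pairs so the caller can update
    # local state, mirroring the batch/db_batch bookkeeping above.
    # `items` must be a sequence, since the flush condition needs len().
    batch = []
    for i, item in enumerate(items, start=1):
        batch.append(item)
        if len(batch) >= batch_size or i == len(items):
            r = requests.post(url,
                              data=json.dumps(batch),
                              headers={"content-type": "application/json"})
            if r.status_code == requests.codes.ok:
                yield from zip(batch, r.json())
            batch = []  # start the next batch even after a failed request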