def run(self):
    super(BulkExtractor, self).run()
    if self.args is None:
        return
    if not HAVE_BULK_EXTRACTOR:
        self.log('error', "Missing dependency, install bulk_extractor with hashdb")
        self.log('info', "https://github.com/simsong/bulk_extractor")
        return
    if not __sessions__.is_set():
        self.log('error', "No session opened")
        return
    if self.args.scan:
        self.scan()
    elif self.args.email:
        self.email()
    elif self.args.ip:
        self.ip()
    elif self.args.domain:
        self.domain()
    elif self.args.blocks:
        self.blocks()
    elif self.args.view:
        self.view()
    elif self.args.list:
        self.list()
    else:
        self.log('error', 'At least one of the parameters is required')
        self.usage()
def autorun_module(file_hash):
    if not file_hash:
        return
    # We need an open session
    if not __sessions__.is_set():
        # Open session
        __sessions__.new(get_sample_path(file_hash))
    for cmd_line in cfg.autorun.commands.split(','):
        split_commands = cmd_line.split(';')
        for split_command in split_commands:
            split_command = split_command.strip()
            if not split_command:
                continue
            root, args = parse(split_command)
            try:
                if root in __modules__:
                    module = __modules__[root]['obj']()
                    module.set_commandline(args)
                    module.run()
                    print_info("Running command {0}".format(split_command))
                    if cfg.modules.store_output and __sessions__.is_set():
                        Database().add_analysis(file_hash, split_command, module.output)
                    if cfg.autorun.verbose:
                        print_output(module.output)
                    del module.output[:]
                else:
                    print_error('{0} is not a valid command. Please check your viper.conf file.'.format(cmd_line))
            except Exception:
                print_error('Viper was unable to complete the command {0}'.format(cmd_line))
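The nested comma/semicolon split in `autorun_module` can be sketched as a small standalone helper. This is a hypothetical illustration of that parsing step only (`split_autorun_commands` is not part of Viper); it mirrors how a `viper.conf` autorun string is expanded into individual commands, with blanks dropped:

```python
def split_autorun_commands(commands):
    """Expand an autorun string ("cmd1; cmd2, cmd3") into a flat command list.

    Comma-separated groups, each containing semicolon-separated commands;
    empty fragments are discarded, matching autorun_module above.
    """
    result = []
    for cmd_line in commands.split(','):
        for part in cmd_line.split(';'):
            part = part.strip()
            if part:
                result.append(part)
    return result
```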
def pehash(self):
    if not HAVE_PEHASH:
        self.log('error', "PEhash is missing. Please copy PEhash to the modules directory of Viper")
        return
    current_pehash = None
    if __sessions__.is_set():
        current_pehash = calculate_pehash(__sessions__.current.file.path)
        self.log('info', "PEhash: {0}".format(bold(current_pehash)))
    if self.args.all or self.args.cluster or self.args.scan:
        db = Database()
        samples = db.find(key='all')
        rows = []
        for sample in samples:
            sample_path = get_sample_path(sample.sha256)
            pe_hash = calculate_pehash(sample_path)
            if pe_hash:
                rows.append((sample.name, sample.md5, pe_hash))
    if self.args.all:
        self.log('info', "PEhash for all files:")
        header = ['Name', 'MD5', 'PEhash']
        self.log('table', dict(header=header, rows=rows))
    elif self.args.cluster:
        self.log('info', "Clustering files by PEhash...")
        cluster = {}
        for sample_name, sample_md5, pe_hash in rows:
            cluster.setdefault(pe_hash, []).append([sample_name, sample_md5])
        for pe_hash, members in cluster.items():
            if len(members) > 1:
                self.log('info', "PEhash cluster {0}:".format(bold(pe_hash)))
                self.log('table', dict(header=['Name', 'MD5'], rows=members))
    elif self.args.scan:
        if __sessions__.is_set() and current_pehash:
            self.log('info', "Finding matching samples...")
            matches = []
            for row in rows:
                if row[1] == __sessions__.current.file.md5:
                    continue
                if row[2] == current_pehash:
                    matches.append([row[0], row[1]])
            if matches:
                self.log('table', dict(header=['Name', 'MD5'], rows=matches))
            else:
                self.log('info', "No matches found")
def cmd_delete(self, *args):
    parser = argparse.ArgumentParser(prog='delete', description="Delete a file")
    parser.add_argument('-a', '--all', action='store_true', help="Delete ALL files in this project")
    parser.add_argument('-f', '--find', action="store_true", help="Delete ALL files from last find")
    try:
        args = parser.parse_args(args)
    except SystemExit:
        return
    while True:
        choice = input("Are you sure? It can't be reverted! [y/n] ")
        if choice == 'y':
            break
        elif choice == 'n':
            return
    if args.all:
        if __sessions__.is_set():
            __sessions__.close()
        samples = self.db.find('all')
        for sample in samples:
            self.db.delete_file(sample.id)
            os.remove(get_sample_path(sample.sha256))
        self.log('info', "Deleted a total of {} files.".format(len(samples)))
    elif args.find:
        if __sessions__.find:
            samples = __sessions__.find
            for sample in samples:
                self.db.delete_file(sample.id)
                os.remove(get_sample_path(sample.sha256))
            self.log('info', "Deleted {} files.".format(len(samples)))
        else:
            self.log('error', "No find result")
    else:
        if __sessions__.is_set():
            rows = self.db.find('sha256', __sessions__.current.file.sha256)
            if rows:
                malware_id = rows[0].id
                if self.db.delete_file(malware_id):
                    self.log("success", "File deleted")
                else:
                    self.log('error', "Unable to delete file")
            os.remove(__sessions__.current.file.path)
            __sessions__.close()
            self.log('info', "Deleted opened file.")
        else:
            self.log('error', "No session open, and no --all argument. Nothing to delete.")
def run(self, *args):
    try:
        args = self.parser.parse_args(args)
    except SystemExit:
        return
    while True:
        choice = input("Are you sure? It can't be reverted! [y/n] ")
        if choice == 'y':
            break
        elif choice == 'n':
            return
    db = Database()
    if args.all:
        if __sessions__.is_set():
            __sessions__.close()
        samples = db.find('all')
        for sample in samples:
            db.delete_file(sample.id)
            os.remove(get_sample_path(sample.sha256))
        self.log('info', "Deleted a total of {} files.".format(len(samples)))
    elif args.find:
        if __sessions__.find:
            samples = __sessions__.find
            for sample in samples:
                db.delete_file(sample.id)
                os.remove(get_sample_path(sample.sha256))
            self.log('info', "Deleted {} files.".format(len(samples)))
        else:
            self.log('error', "No find result")
    else:
        if __sessions__.is_set():
            rows = db.find('sha256', __sessions__.current.file.sha256)
            if rows:
                malware_id = rows[0].id
                if db.delete_file(malware_id):
                    self.log("success", "File deleted")
                else:
                    self.log('error', "Unable to delete file")
            os.remove(__sessions__.current.file.path)
            __sessions__.close()
            self.log('info', "Deleted opened file.")
        else:
            self.log('error', "No session open, and no --all argument. Nothing to delete.")
def run(self):
    super(b64dec, self).run()
    if not __sessions__.is_set():
        self.log('error', "No open session")
        return
    if not os.path.exists(__sessions__.current.file.path):
        self.log('error', 'No file found')
        return
    # Wide (UTF-16LE) strings: printable ASCII bytes interleaved with nulls.
    wide_regexp = re.compile(rb'(?:[\x20-\x7E]\x00){3,}')
    strings = [w.decode('utf-16le') for w in wide_regexp.findall(__sessions__.current.file.data)]
    # Plain ASCII strings that may contain base64 data.
    ascii_regexp = rb'[\x20\x30-\x39\x41-\x5a\x61-\x7a\-\.:\=]{4,}'
    strings += [w.decode('ascii') for w in re.findall(ascii_regexp, __sessions__.current.file.data)]
    found = False
    for w in strings:
        match = BASE64_REGEX.search(w)
        if not match:
            continue
        try:
            decstr = base64.b64decode(match.group(0)).decode('ascii')
        except (ValueError, UnicodeDecodeError):
            continue
        found = True
        # self.log('info', 'base64 string found: %s' % (match.group(0)))
        self.log('info', 'decoded string: %s' % decstr)
    if not found:
        self.log('info', 'No matches found')
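The core idea of `b64dec` — carve UTF-16LE strings out of raw bytes, then try to base64-decode them — can be shown as a self-contained sketch. Note that `BASE64_RE` below is a minimal stand-in for the module's `BASE64_REGEX`, which is defined elsewhere in the module and may differ:

```python
import base64
import re

# Stand-in for the module-level BASE64_REGEX (an assumption, not Viper's actual pattern).
BASE64_RE = re.compile(r'(?:[A-Za-z0-9+/]{4}){2,}(?:[A-Za-z0-9+/]{2}==|[A-Za-z0-9+/]{3}=)?')


def decode_wide_base64(data):
    """Return ASCII payloads of base64 strings stored as UTF-16LE in *data*."""
    # Wide strings: printable ASCII interleaved with null bytes, 3+ chars.
    wide = re.compile(rb'(?:[\x20-\x7E]\x00){3,}')
    results = []
    for raw in wide.findall(data):
        text = raw.decode('utf-16le')
        match = BASE64_RE.search(text)
        if not match:
            continue
        try:
            results.append(base64.b64decode(match.group(0)).decode('ascii'))
        except (ValueError, UnicodeDecodeError):
            pass  # not valid base64 / not printable ASCII payload
    return results
```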
def module_cmdline(cmd_line, file_hash):
    html = ""
    cmd = Commands()
    split_commands = cmd_line.split(';')
    for split_command in split_commands:
        split_command = split_command.strip()
        if not split_command:
            continue
        root, args = parse(split_command)
        try:
            if root in cmd.commands:
                cmd.commands[root]['obj'](*args)
                html += print_output(cmd.output)
                del cmd.output[:]
            elif root in __modules__:
                # If previous commands did not open a session, open one on the current file.
                if file_hash:
                    path = get_sample_path(file_hash)
                    __sessions__.new(path)
                module = __modules__[root]['obj']()
                module.set_commandline(args)
                module.run()
                html += print_output(module.output)
                if cfg.modules.store_output and __sessions__.is_set():
                    Database().add_analysis(file_hash, split_command, module.output)
                del module.output[:]
            else:
                html += '<p class="text-danger">{0} is not a valid command</p>'.format(cmd_line)
        except Exception:
            html += '<p class="text-danger">We were unable to complete the command {0}</p>'.format(cmd_line)
    __sessions__.close()
    return html
def run(self):
    # Both requests and BeautifulSoup are required, so missing either aborts.
    if not HAVE_REQUESTS or not HAVE_BS4:
        self.log('error', "Missing dependencies (`pip install requests beautifulsoup4`)")
        return
    if not __sessions__.is_set():
        self.log('error', "No session opened")
        return
    if len(self.args) == 0:
        self.help()
        return
    if self.args[0] == 'help':
        self.help()
    elif self.args[0] == 'malwr':
        self.malwr()
    elif self.args[0] == 'anubis':
        self.anubis()
    elif self.args[0] == 'threat':
        self.threat()
    elif self.args[0] == 'joe':
        self.joe()
    elif self.args[0] == 'meta':
        self.meta()
def run(self):
    super(Cuckoo, self).run()
    if self.args is None:
        return
    if not __sessions__.is_set():
        self.log('error', "No session opened")
        return
    if not HAVE_REQUESTS:
        self.log('error', "Missing dependency, install requests (`pip install requests`)")
        return
    host = self.args.host
    port = self.args.port
    url = 'http://{0}:{1}/tasks/create/file'.format(host, port)
    # Use a context manager so the file handle is closed after the upload.
    with open(__sessions__.current.file.path, 'rb') as handle:
        files = dict(file=handle)
        try:
            response = requests.post(url, files=files)
        except requests.ConnectionError:
            self.log('error', "Unable to connect to Cuckoo API at {0}:{1}".format(host, port))
            return
def run(self):
    if not __sessions__.is_set():
        print_error("No session opened")
        return
    data = urllib.urlencode({'resource': __sessions__.current.file.md5, 'apikey': KEY})
    try:
        request = urllib2.Request(VIRUSTOTAL_URL, data)
        response = urllib2.urlopen(request)
        response_data = response.read()
    except Exception as e:
        print_error("Failed: {0}".format(e))
        return
    try:
        virustotal = json.loads(response_data)
    except ValueError as e:
        print_error("Failed: {0}".format(e))
        return
    rows = []
    if 'scans' in virustotal:
        for engine, signature in virustotal['scans'].items():
            if signature['detected']:
                signature = signature['result']
            else:
                signature = ''
            rows.append([engine, signature])
    print(table(['Antivirus', 'Signature'], rows))
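The JSON-to-rows step above can be isolated into a small, testable helper. This is a hypothetical sketch (not part of Viper) assuming the VirusTotal v2 `scans` layout shown in the surrounding code:

```python
import json


def parse_vt_scans(response_data):
    """Turn a VirusTotal file-report JSON payload into sorted (engine, signature) rows.

    Engines with detected=False get an empty signature, mirroring the loop above.
    """
    report = json.loads(response_data)
    rows = []
    for engine, result in report.get('scans', {}).items():
        rows.append([engine, result['result'] if result['detected'] else ''])
    return sorted(rows)
```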
def run(self):
    super(ViperMetaScan, self).run()
    if self.ms.was_api_error:
        return
    if self.args:
        if self.args.workflow:
            if isinstance(self.args.workflow, list):
                self.ms.workflow = self.dequote(' '.join(self.args.workflow))
            else:
                self.ms.workflow = self.args.workflow
        if self.args.engines:
            self.ms.show_engines()
        elif self.args.license:
            self.ms.show_license()
        elif self.args.listworkflows:
            self.ms.show_workflows()
        elif self.args.find:
            if not __sessions__.find:
                self.log('error', "No find result")
                return
            self.ms.files = self.get_files_from_last_find(__sessions__)
        else:
            if not __sessions__.is_set():
                self.log('error', "No session opened")
                return
            self.ms.files = self.get_file_from_current_session(__sessions__)
        if self.ms.files:
            summary = self.ms.show_analyzed_info()
            self.ms.show_summary(summary)
def run(self, *args):
    try:
        args = self.parser.parse_args(args)
    except SystemExit:
        return
    if not __sessions__.is_set():
        self.log('error', "No open session")
        return
    if not __project__.name:
        src_project = "default"
    else:
        src_project = __project__.name
    db = Database()
    db.copied_id_sha256 = []
    res = db.copy(__sessions__.current.file.id,
                  src_project=src_project, dst_project=args.project,
                  copy_analysis=True, copy_notes=True, copy_tags=True,
                  copy_children=args.children)
    if args.delete:
        __sessions__.close()
        for item_id, item_sha256 in db.copied_id_sha256:
            db.delete_file(item_id)
            os.remove(get_sample_path(item_sha256))
            self.log('info', "Deleted: {}".format(item_sha256))
    if res:
        self.log('success', "Successfully copied sample(s)")
        return True
    else:
        self.log('error', "Something went wrong")
        return False
def run(self, *args):
    try:
        args = self.parser.parse_args(args)
    except SystemExit:
        return
    if __sessions__.is_set():
        self.log('table', dict(
            header=['Key', 'Value'],
            rows=[
                ['Name', __sessions__.current.file.name],
                ['Tags', __sessions__.current.file.tags],
                ['Path', __sessions__.current.file.path],
                ['Size', __sessions__.current.file.size],
                ['Type', __sessions__.current.file.type],
                ['Mime', __sessions__.current.file.mime],
                ['MD5', __sessions__.current.file.md5],
                ['SHA1', __sessions__.current.file.sha1],
                ['SHA256', __sessions__.current.file.sha256],
                ['SHA512', __sessions__.current.file.sha512],
                ['SSdeep', __sessions__.current.file.ssdeep],
                ['CRC32', __sessions__.current.file.crc32],
                ['Parent', __sessions__.current.file.parent],
                ['Children', __sessions__.current.file.children]
            ]
        ))
def run(self):
    super(JoeSandbox, self).run()
    if not cfg.joesandbox:
        self.log("error", 'The JoeSandbox module cannot be used unless the configuration is defined.')
        return
    self.joe = jbxapi.JoeSandbox(apiurl=cfg.joesandbox.apiurl,
                                 apikey=cfg.joesandbox.apikey,
                                 accept_tac=cfg.joesandbox.accept_tac,
                                 verify_ssl=cfg.joesandbox.verify)
    if self.args is None:
        return
    if not __sessions__.is_set():
        self.log('error', "No open session.")
        return
    try:
        if self.args.submit:
            self.submit()
        elif self.args.tasks:
            self.tasks()
        elif self.args.dropped:
            self.dropped()
        elif self.args.clear:
            self.clear()
        elif self.args.report:
            self.report()
    except jbxapi.JoeException as e:
        self.log("error", e)
def auto(self):
    if not HAVE_YARA:
        self.log('error', "Missing dependency, install yara (see http://plusvic.github.io/yara/)")
        return
    if not __sessions__.is_set():
        self.log('error', "No open session")
        return
    rules_paths = [
        '/usr/share/viper/yara/rats.yara',
        os.path.join(VIPER_ROOT, 'data/yara/rats.yara')
    ]
    rules_path = None
    for cur_path in rules_paths:
        if os.path.exists(cur_path):
            rules_path = cur_path
            break
    if rules_path is None:
        self.log('error', "Unable to locate rats.yara in any known location")
        return
    rules = yara.compile(rules_path)
    for match in rules.match(__sessions__.current.file.path):
        if 'family' in match.meta:
            self.log('info', "Automatically detected supported RAT {0}".format(match.rule))
            self.get_config(match.meta['family'])
            return
    self.log('info', "No known RAT detected")
def run(self):
    super(vBin, self).run()
    if self.args is None:
        return
    if not HAVE_PYIDB:
        self.log('error', "Missing dependency, install python-idb")
        return
    if not __sessions__.is_set():
        self.log('error', "No open session")
        return
    current_file = __sessions__.current.file.path
    current_dir = self.get_current_file_dir(current_file)
    current_idb = self.get_current_idb_path(current_dir)
    if not os.path.exists(current_idb):
        current_idb = self.get_current_idb_path64(current_dir)
    # Loading IDB
    db = self.get_db(current_idb)
    if self.args.subname == "functions":
        self.list_functions(db)
    elif self.args.subname == "disass":
        self.disass(db, self.args.function)
    elif self.args.subname == "calls":
        self.show_calls(db, self.args.function)
    else:
        self.log('error', 'At least one of the parameters is required')
        self.usage()
def run(self):
    super(Exif, self).run()
    if self.args is None:
        return
    if not __sessions__.is_set():
        self.log('error', "No open session. This command expects a file to be open.")
        return
    if not HAVE_EXIF:
        self.log('error', "Missing dependency, install pyexiftool")
        return
    try:
        with exiftool.ExifTool() as et:
            metadata = et.get_metadata(__sessions__.current.file.path)
    except OSError:
        self.log('error', "Exiftool is not installed")
        return
    rows = []
    for key, value in metadata.items():
        rows.append([key, value])
    rows = sorted(rows, key=lambda entry: entry[0])
    self.log('info', "MetaData:")
    self.log('table', dict(header=['Key', 'Value'], rows=rows))
def get_config(self, family):
    if not __sessions__.is_set():
        self.log('error', "No open session")
        return
    try:
        module = importlib.import_module('viper.modules.rats.{0}'.format(family))
    except ImportError:
        self.log('error', "There is no module for family {0}".format(bold(family)))
        return
    try:
        config = module.config(__sessions__.current.file.data)
    except Exception:
        config = None
    if not config:
        self.log('error', "No configuration detected")
        return
    rows = []
    for key, value in config.items():
        rows.append([key, value])
    rows = sorted(rows, key=lambda entry: entry[0])
    self.log('info', "Configuration:")
    self.log('table', dict(header=['Key', 'Value'], rows=rows))
def run(self):
    super(PDF, self).run()
    if self.args is None:
        return
    if not __sessions__.is_set():
        self.log('error', "No open session. This command expects a file to be open.")
        return False
    if 'PDF' not in __sessions__.current.file.type:
        # A file with a '%PDF' signature inside the first 1024 bytes is a valid
        # PDF file. The magic lib doesn't detect it if there is an offset.
        header = __sessions__.current.file.data[:1024]
        if b'%PDF' not in header:
            self.log('error', "The opened file doesn't appear to be a PDF document")
            return
    if self.args.subname == 'id':
        self.pdf_id()
    elif self.args.subname == 'streams':
        self.streams()
    else:
        self.log('error', 'At least one of the parameters is required')
        self.usage()
def upload(self):
    if not __sessions__.is_set():
        self.log('error', "No session opened")
        return False
    categ = self.categories.get(self.args.categ)
    if self.args.info is not None:
        info = ' '.join(self.args.info)
    else:
        info = None
    # No need to check the output: if the event_id is None, we create a new one.
    event_id = self._get_eventid(True)
    try:
        result = self.misp.upload_sample(__sessions__.current.file.name,
                                         __sessions__.current.file.path,
                                         event_id, self.args.distrib,
                                         self.args.ids, categ, info,
                                         self.args.analysis, self.args.threat)
    except Exception as e:
        self.log('error', e)
        return
    if not self._has_error_message(result):
        self.log('success', "File uploaded successfully")
        if event_id is None:
            event_id = result['id']
        full_event = self.misp.get(event_id)
        if not self._has_error_message(full_event):
            return __sessions__.new(misp_event=MispEvent(full_event))
def run(self):
    super(Reports, self).run()
    if self.args is None:
        return
    # Both requests and BeautifulSoup are required, so missing either aborts.
    if not HAVE_REQUESTS or not HAVE_BS4:
        self.log('error', "Missing dependencies (`pip install requests beautifulsoup4`)")
        return
    if not __sessions__.is_set():
        self.log('error', "No session opened")
        return
    if self.args.malwr:
        self.malwr()
    elif self.args.anubis:
        self.anubis()
    elif self.args.threat:
        self.threat()
    elif self.args.joe:
        self.joe()
    elif self.args.meta:
        self.meta()
    else:
        self.log('error', 'At least one of the parameters is required')
        self.usage()
def upload(self):
    if not __sessions__.is_set():
        self.log("error", "No session opened")
        return False
    categ = self.categories.get(self.args.categ)
    out = self.misp.upload_sample(
        __sessions__.current.file.name,
        __sessions__.current.file.path,
        self.args.event,
        self.args.distrib,
        self.args.ids,
        categ,
        self.args.info,
        self.args.analysis,
        self.args.threat,
    )
    result = out.json()
    if out.status_code == 200:
        if result.get("errors") is not None:
            self.log("error", result.get("errors")[0]["error"]["value"][0])
        else:
            self.log("success", "File uploaded successfully")
    else:
        self.log("error", result.get("message"))
def run(self):
    super(PEBL, self).run()
    if self.args is None:
        return
    if not __sessions__.is_set():
        self.log('error', "No session opened")
        return
    if not self.pe:
        try:
            self.pe = pefile.PE(__sessions__.current.file.path)
        except pefile.PEFormatError as e:
            self.log('error', "Unable to parse PE file: {0}".format(e))
            return False
    if hasattr(self.pe, 'DIRECTORY_ENTRY_IMPORT'):
        pestudio_fct = '/home/mrrobot/viper/modules/functions.xml'
        # Read the blacklist once instead of re-opening the file for every symbol.
        with open(pestudio_fct) as handle:
            blacklist = handle.read()
        for entry in self.pe.DIRECTORY_ENTRY_IMPORT:
            try:
                self.log('info', "DLL: {0}".format(entry.dll))
                for symbol in entry.imports:
                    self.log('item', "{0}: {1}".format(hex(symbol.address), symbol.name))
                    searchstr1 = 'bl="1" ad="1">' + symbol.name + '</fct>'
                    searchstr2 = 'bl="1" ad="0">' + symbol.name + '</fct>'
                    if searchstr1 in blacklist or searchstr2 in blacklist:
                        self.log('warning', "  BLACKLISTED FUNCTION!")
            except Exception:
                continue
def upload(self):
    if not __sessions__.is_set():
        self.log('error', "No session opened")
        return False
    categ = self.categories.get(self.args.categ)
    if self.args.info is not None:
        info = ' '.join(self.args.info)
    else:
        info = None
    if __sessions__.current.misp_event and self.args.event is None:
        event = __sessions__.current.misp_event.event_id
    else:
        event = None
    try:
        out = self.misp.upload_sample(__sessions__.current.file.name,
                                      __sessions__.current.file.path,
                                      event, self.args.distrib,
                                      self.args.ids, categ, info,
                                      self.args.analysis, self.args.threat)
    except Exception as e:
        self.log('error', e)
        return
    result = out.json()
    if out.status_code == 200:
        if result.get('errors') is not None:
            self.log('error', result.get('errors')[0]['error']['value'][0])
        else:
            if event is not None:
                full_event = self.misp.get_event(event)
                return __sessions__.new(misp_event=MispEvent(full_event.json()))
            # TODO: also open a session when upload_sample created a new event
            # (the response doesn't contain the event ID)
            # __sessions__.new(misp_event=MispEvent(result))
            self.log('success', "File uploaded successfully")
    else:
        self.log('error', result.get('message'))
def run(self):
    super(Strings, self).run()
    if self.args is None:
        return
    if not (self.args.all or self.args.files or self.args.hosts
            or self.args.network or self.args.interesting):
        self.log('error', 'At least one of the parameters is required')
        self.usage()
        return
    if self.args.scan:
        db = Database()
        samples = db.find(key='all')
        for sample in samples:
            sample_path = get_sample_path(sample.sha256)
            strings = self.get_strings(File(sample_path))
            self.process_strings(strings, sample.name)
    else:
        if not __sessions__.is_set():
            self.log('error', "No open session")
            return
        if os.path.exists(__sessions__.current.file.path):
            strings = self.get_strings(__sessions__.current.file)
            self.process_strings(strings)
def run(self):
    super(xforce, self).run()
    # Get our keys
    self.key = cfg.xforce.xforce_key
    self.password = cfg.xforce.xforce_password
    if self.key is None or self.password is None:
        self.log('error', 'This command requires you to configure your key and password in the conf file')
        return
    # Check our session
    if not __sessions__.is_set():
        self.log('error', "No open session")
        return
    if not os.path.exists(__sessions__.current.file.path):
        self.log('error', 'No file found')
        return
    # Get our MD5
    filehash = __sessions__.current.file.md5
    # Query X-Force
    try:
        url = "https://api.xforce.ibmcloud.com/malware/" + filehash
        # Encode to bytes before base64 so this also works on Python 3.
        token = base64.b64encode("{0}:{1}".format(self.key, self.password).encode('ascii')).decode('ascii')
        headers = {'Authorization': "Basic " + token, 'Accept': 'application/json'}
        response = requests.get(url, headers=headers, timeout=20)
        all_json = response.json()
        results = json.dumps(all_json, indent=4, sort_keys=True)
        self.log('info', 'XForce Results: %s' % results)
    except Exception:
        self.log('error', 'Issues calling XForce')
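The Basic-auth header construction used for the X-Force call can be factored into a small bytes-safe helper. This is a hypothetical sketch (`basic_auth_header` is not part of Viper) showing the key:password base64 encoding that works on Python 3, where `base64.b64encode` requires bytes:

```python
import base64


def basic_auth_header(key, password):
    """Build the HTTP Basic-auth headers for an API call (Python 3 bytes-safe)."""
    token = base64.b64encode('{0}:{1}'.format(key, password).encode('ascii')).decode('ascii')
    return {'Authorization': 'Basic ' + token, 'Accept': 'application/json'}
```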
def run(self):
    super(Shellcode, self).run()
    if self.args is None:
        return
    if not __sessions__.is_set():
        self.log("error", "No session opened")
        return
    collection = [
        {
            "description": "FS:[30h] shellcode",
            "patterns": [
                b"\x64\xa1\x30\x00|\x64\x8b\x0d\x30|\x64\x8b\x0d\x30|\x64\x8b\x15\x30|\x64\x8b\x35\x30|\x64\x8b\x3d\x30|\x6a\x30.\x64\x8b|\x33..\xb3\x64\x8b",
                "64a13000|648b0d30|648b0d30|648b1530|648b3530|648b3d30|6a30..648b|33....b3648b",
            ],
        },
        {
            "description": "FS:[00h] shellcode",
            "patterns": [
                b"\x64\x8b\x1d|\x64\xa1\x00|\x64\x8b\x0d|\x64\x8b\x15|\x64\x8b\x35|\x64\x8b\x3d",
                "648b1d00|64a10000|648b0d00|648b1500|648b3500|648b3d00",
            ],
        },
        {
            "description": "API hashing",
            "patterns": [b"\x74.\xc1.\x0d\x03|\x74.\xc1.\x07\x03", "74..c1..0d03|74..c1..0703"],
        },
        {
            "description": "PUSH DWORD[]/CALL[]",
            "patterns": [b"\x00\xff\x75\x00\xff\x55", "00ff7500ff55"],
        },
        {
            "description": "FLDZ/FSTENV [esp-12]",
            "patterns": [b"\x00\xd9\x00\xee\x00\xd9\x74\x24\x00\xf4\x00\x00", "00d900ee00d9742400f40000"],
        },
        {
            "description": "CALL next/POP",
            "patterns": [
                b"\x00\xe8\x00\x00\x00\x00(\x58|\x59|\x5a|\x5b|\x5e|\x5f|\x5d)\x00\x00",
                "00e800000000(58|59|5a|5b|5e|5f|5d)0000",
            ],
        },
        {
            "description": "Function prolog",
            "patterns": [
                b"\x55\x8b\x00\xec\x83\x00\xc4|\x55\x8b\x0ec\x81\x00\xec|\x55\x8b\x00\xec\x8b|\x55\x8b\x00\xec\xe8|\x55\x8b\x00\xec\xe9",
                "558b00ec8300c4|558b0ec8100ec|558b00ec8b|558b00ece8|558b00ece9",
            ],
        },
    ]
    self.log("info", "Searching for known shellcode patterns...")
    for entry in collection:
        for pattern in entry["patterns"]:
            match = re.search(pattern, __sessions__.current.file.data)
            if match:
                offset = match.start()
                self.log("info", "{0} pattern matched at offset {1}".format(entry["description"], offset))
                self.log("", cyan(hexdump(__sessions__.current.file.data[offset:], maxlines=15)))
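The scan loop above can be sketched as a standalone helper. Note the assumption: only the bytes patterns are searched directly against raw file data (mixing str patterns with bytes data raises `TypeError` on Python 3; the hex-string variants would need a hexlified search space). `find_patterns` is an illustration, not part of Viper:

```python
import re


def find_patterns(data, collection):
    """Search each entry's bytes patterns against *data*; return (description, offset) hits."""
    hits = []
    for entry in collection:
        for pattern in entry['patterns']:
            if not isinstance(pattern, bytes):
                continue  # hex-string variants need a hexlified search space, skipped here
            match = re.search(pattern, data)
            if match:
                hits.append((entry['description'], match.start()))
    return hits
```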
def run(self):
    if not __sessions__.is_set():
        self.log('error', "No session opened")
        return

    def usage():
        self.log('', "usage: r2 [-h] [-w]")

    def help():
        usage()
        self.log('', "")
        self.log('', "Options:")
        self.log('', "\t--help (-h)\tShow this help message")
        self.log('', "\t--webserver (-w)\tStart web-frontend for radare2")
        self.log('', "")

    try:
        opts, argv = getopt.getopt(self.args[0:], 'hw', ['help', 'webserver'])
    except getopt.GetoptError as e:
        self.log('', e)
        return
    for opt, value in opts:
        if opt in ('-h', '--help'):
            help()
            return
        elif opt in ('-w', '--webserver'):
            self.server = "-c=H "
    filetype = __sessions__.current.file.type
    if 'x86-64' in filetype:
        self.is_64b = True
    arch = '64' if self.is_64b else '32'
    if 'DLL' in filetype:
        self.ext = '.dll'
        to_print = [arch, 'bit DLL (Windows)']
        if "native" in filetype:
            to_print.append('perhaps a driver (.sys)')
        self.log('info', ' '.join(to_print))
    elif 'PE32' in filetype:
        self.ext = '.exe'
        self.log('info', ' '.join([arch, 'bit executable (Windows)']))
    elif 'shared object' in filetype:
        self.ext = '.so'
        self.log('info', ' '.join([arch, 'bit shared object (linux)']))
    elif 'ELF' in filetype:
        self.ext = ''
        self.log('info', ' '.join([arch, 'bit executable (linux)']))
    else:
        self.log('error', "Unknown binary")
    try:
        self.open_radare(__sessions__.current.file.path)
    except Exception:
        self.log('error', "Unable to start Radare2")
def run(self):
    if not __sessions__.is_set():
        print_error("No session opened")
        return

    def usage():
        print("usage: r2 [-h] [-w]")

    def help():
        usage()
        print("")
        print("Options:")
        print("\t--help (-h)\tShow this help message")
        print("\t--webserver (-w)\tStart web-frontend for radare2")
        print("")

    try:
        opts, argv = getopt.getopt(self.args[0:], "hw", ["help", "webserver"])
    except getopt.GetoptError as e:
        print(e)
        return
    for opt, value in opts:
        if opt in ("-h", "--help"):
            help()
            return
        elif opt in ("-w", "--webserver"):
            self.server = "-c=H "
    filetype = __sessions__.current.file.type
    if "x86-64" in filetype:
        self.is_64b = True
    arch = "64" if self.is_64b else "32"
    if "DLL" in filetype:
        self.ext = ".dll"
        to_print = [arch, "bit DLL (Windows)"]
        if "native" in filetype:
            to_print.append("perhaps a driver (.sys)")
        print_info(" ".join(to_print))
    elif "PE32" in filetype:
        self.ext = ".exe"
        print_info(" ".join([arch, "bit executable (Windows)"]))
    elif "shared object" in filetype:
        self.ext = ".so"
        print_info(" ".join([arch, "bit shared object (linux)"]))
    elif "ELF" in filetype:
        self.ext = ""
        print_info(" ".join([arch, "bit executable (linux)"]))
    else:
        print_error("Unknown binary")
    try:
        self.open_radare(__sessions__.current.file.path)
    except Exception:
        print_error("Unable to start Radare2")
def run(self):
    if not __sessions__.is_set():
        print_error("No session opened")
        return
    if len(self.args) != 0:
        self.help()
        return
    self.edit()
def run(self, *args):
    try:
        args = self.parser.parse_args(args)
    except SystemExit:
        return
    if not __sessions__.is_set():
        self.log('error', "No open session. This command expects a file to be open.")
        return
    if not __project__.name:
        src_project = "default"
    else:
        src_project = __project__.name
    db = Database()
    db.copied_id_sha256 = []
    res = db.copy(__sessions__.current.file.id,
                  src_project=src_project, dst_project=args.project,
                  copy_analysis=True, copy_notes=True, copy_tags=True,
                  copy_children=args.children)
    if args.delete:
        __sessions__.close()
        for item_id, item_sha256 in db.copied_id_sha256:
            db.delete_file(item_id)
            os.remove(get_sample_path(item_sha256))
            self.log('info', "Deleted: {}".format(item_sha256))
    if res:
        self.log('success', "Successfully copied sample(s)")
        return True
    else:
        self.log('error', "Something went wrong")
        return False
def run(self):
    super(Radare, self).run()
    if self.args is None:
        return
    r2command = ' '.join(self.args.command)
    if not __sessions__.is_set():
        if os.path.isfile(r2command):
            __sessions__.new(r2command)
            self.open_radare()
            return
        else:
            self.log('error', "No open session. This command expects a file to be open.")
            return
    if not r2command:
        self.open_radare()
    else:
        self.command(r2command)
def run(self):
    super(ClamAV, self).run()
    if self.args is None:
        return
    if not HAVE_CLAMD:
        self.log('error', "Missing dependency, install pyclamd (`pip install pyclamd`)")
        return
    if not self.Connect():
        self.log('error', 'Daemon is not responding!')
        return
    if self.args.all:
        self.ScanAll()
    elif __sessions__.is_set():
        self.ScanFile(__sessions__.current.file)
    else:
        self.log('error', 'No open session')
def module_text(cmd_string, file_hash):
    # A lot of commands rely on an open session,
    # so open a session on the file hash.
    path = get_sample_path(file_hash)
    __sessions__.new(path)
    # Run the module with args
    if __sessions__.is_set():
        root, args = parse(cmd_string)
        if root in __modules__:
            module = __modules__[root]['obj']()
            module.set_commandline(args)
            module.run()
            html = print_output(module.output)
            del module.output[:]
        else:
            html = '<p class="text-danger">{0} is not a valid command</p>'.format(cmd_string)
    else:
        html = '<p class="text-danger">! There is no open session</p>'
    # Close the session
    __sessions__.close()
    return html
def cmd_info(self, *args):
    if __sessions__.is_set():
        self.log('table', dict(
            header=['Key', 'Value'],
            rows=[
                ['Name', __sessions__.current.file.name],
                ['Tags', __sessions__.current.file.tags],
                ['Path', __sessions__.current.file.path],
                ['Size', __sessions__.current.file.size],
                ['Type', __sessions__.current.file.type],
                ['Mime', __sessions__.current.file.mime],
                ['MD5', __sessions__.current.file.md5],
                ['SHA1', __sessions__.current.file.sha1],
                ['SHA256', __sessions__.current.file.sha256],
                ['SHA512', __sessions__.current.file.sha512],
                ['SSdeep', __sessions__.current.file.ssdeep],
                ['CRC32', __sessions__.current.file.crc32],
                ['Parent', __sessions__.current.file.parent],
                ['Children', __sessions__.current.file.children]
            ]
        ))
def cmd_delete(self, *args):
    if __sessions__.is_set():
        while True:
            choice = raw_input("Are you sure you want to delete this binary? Can't be reverted! [y/n] ")
            if choice == 'y':
                break
            elif choice == 'n':
                return
        rows = self.db.find('sha256', __sessions__.current.file.sha256)
        if rows:
            malware_id = rows[0].id
            if self.db.delete_file(malware_id):
                self.log("success", "File deleted")
            else:
                self.log('error', "Unable to delete file")
        os.remove(__sessions__.current.file.path)
        __sessions__.close()
    else:
        self.log('error', "No session opened")
def run(self):
    super(PDF, self).run()
    if self.args is None:
        return
    if not __sessions__.is_set():
        self.log('error', "No session opened")
        return False
    if 'PDF' not in __sessions__.current.file.type:
        self.log('error', "The opened file doesn't appear to be a PDF document")
        return
    if self.args.subname == 'id':
        self.pdf_id()
    elif self.args.subname == 'streams':
        self.streams()
    else:
        self.log('error', 'At least one of the parameters is required')
        self.usage()
def run(self):
    super(Radare, self).run()
    if self.args is None:
        return
    if not __sessions__.is_set():
        self.log('error', "No session opened")
        return
    if self.args.webserver:
        self.server = "-c=H"
    filetype = __sessions__.current.file.type
    if 'x86-64' in filetype:
        self.is_64b = True
    arch = '64' if self.is_64b else '32'
    if 'DLL' in filetype:
        self.ext = '.dll'
        to_print = [arch, 'bit DLL (Windows)']
        if "native" in filetype:
            to_print.append('perhaps a driver (.sys)')
        self.log('info', ' '.join(to_print))
    elif 'PE32' in filetype:
        self.ext = '.exe'
        self.log('info', ' '.join([arch, 'bit executable (Windows)']))
    elif 'shared object' in filetype:
        self.ext = '.so'
        self.log('info', ' '.join([arch, 'bit shared object (linux)']))
    elif 'ELF' in filetype:
        self.ext = ''
        self.log('info', ' '.join([arch, 'bit executable (linux)']))
    else:
        self.log('error', "Unknown binary")
    try:
        self.open_radare(__sessions__.current.file.path)
    except Exception:
        self.log('error', "Unable to start Radare2")
def run(self):
    super(Triage, self).run()
    db = Database()
    if self.args and self.args.all:
        samples = db.find(key='all')
        for sample in samples:
            tags = []
            tags.extend(self._triage_file_type(sample))
            db.add_tags(sample.sha256, tags)
    else:
        # We're running against the already opened file.
        if not __sessions__.is_set():
            self.log('error', "No open session")
            return
        tags = []
        tags.extend(self._triage_file_type(__sessions__.current.file))
        db.add_tags(__sessions__.current.file.sha256, tags)
def module_cmdline(project=None, cmd_line=None, file_hash=None):
    html = ""
    cmd = Commands()
    split_commands = cmd_line.split(';')
    for split_command in split_commands:
        split_command = split_command.strip()
        if not split_command:
            continue
        root, args = parse(split_command)
        try:
            if root in cmd.commands:
                cmd_to_run = cmd.commands[root]['obj']
                cmd_to_run(*args)
                cmd_instance = cmd_to_run.__self__
                html += print_output(cmd_instance.output)
                del cmd_instance.output[:]
            elif root in __modules__:
                # If previous commands did not open a session, open one on the current file.
                if file_hash:
                    __project__.open(project)
                    path = get_sample_path(file_hash, __project__)
                    __sessions__.new(path)
                module = __modules__[root]['obj']()
                module.set_commandline(args)
                module.run()
                html += print_output(module.output)
                if cfg.modules.store_output and __sessions__.is_set():
                    Database().add_analysis(file_hash, split_command, module.output)
                del module.output[:]
            else:
                html += '<p class="text-danger">{0} is not a valid command</p>'.format(cmd_line)
        except Exception:
            html += '<p class="text-danger">We were unable to complete the command {0}</p>'.format(cmd_line)
    __sessions__.close()
    return html
def module_text(file_hash, cmd_string):
    # Redirect stdout to an in-memory file.
    sys.stdout = StringIO()
    # A lot of commands rely on an open session,
    # so open a session on the file hash.
    path = get_sample_path(file_hash)
    __sessions__.new(path)
    # Run the module with args; the output will end up in the StringIO object.
    if __sessions__.is_set():
        root, args = parse(cmd_string)
        if root in __modules__:
            module = __modules__[root]['obj']()
            module.set_args(args)
            module.run()
    # Parse the raw data into something we can use.
    clean_text = parse_text(sys.stdout.getvalue())
    # Close the session.
    __sessions__.close()
    return clean_text
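The capture trick in module_text, swapping sys.stdout for a StringIO buffer for the duration of a module run, can be sketched standalone. capture_stdout below is an illustrative helper, not part of Viper; the key point is restoring the real stdout in a finally block so an exception in the module cannot leave the shell mute:

```python
import sys
from io import StringIO

def capture_stdout(func, *args):
    """Temporarily redirect stdout to an in-memory buffer and
    return whatever the callable printed."""
    original = sys.stdout
    sys.stdout = StringIO()
    try:
        func(*args)
        return sys.stdout.getvalue()
    finally:
        # Always restore the real stdout, even if func() raises.
        sys.stdout = original

output = capture_stdout(print, "hello from a module")  # "hello from a module\n"
```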
def run(self):
    super(Reports, self).run()
    if self.args is None:
        return

    # Both requests and BeautifulSoup are needed.
    if not (HAVE_REQUESTS and HAVE_BS4):
        self.log('error', "Missing dependencies (`pip install requests beautifulsoup4`)")
        return

    if not __sessions__.is_set():
        self.log('error', "No open session. This command expects a file to be open.")
        return

    if self.args.malwr:
        self.malwr()
    elif self.args.threat:
        self.threat()
    elif self.args.meta:
        self.meta()
    else:
        self.log('error', 'At least one of the parameters is required')
        self.usage()
def run(self):
    super(Rtf, self).run()
    if self.args is None:
        return

    if not __sessions__.is_set():
        self.log('error', 'No open session. This command expects a file to be open.')
        return

    if not HAVE_RTF:
        self.log('error', 'Missing dependency, install oletools (`pip install oletools`)')
        return

    if self.args.list:
        self.list()
    elif self.args.save:
        self.save(self.args.save)
    else:
        self.parser.print_usage()
def auto(self):
    if not HAVE_YARA:
        self.log('error', "Missing dependency, install yara (see http://plusvic.github.io/yara/)")
        return

    if not __sessions__.is_set():
        self.log('error', "No session opened")
        return

    rules = yara.compile(os.path.join(VIPER_ROOT, 'data/yara/rats.yara'))
    for match in rules.match(__sessions__.current.file.path):
        if 'family' in match.meta:
            self.log('info', "Automatically detected supported RAT {0}".format(match.rule))
            self.get_config(match.meta['family'])
            return

    self.log('info', "No known RAT detected")
def run(self):
    if not __sessions__.is_set():
        print_error("No session opened")
        return

    filetype = __sessions__.current.file.type
    if 'x86-64' in filetype:
        self.is_64b = True

    if 'DLL' in filetype:
        self.ext = '.dll'
    elif 'PE32' in filetype:
        self.ext = '.exe'
    elif 'shared object' in filetype:
        self.ext = '.so'
    elif 'ELF' in filetype:
        self.ext = ''
    else:
        print_error("File type not supported")
        return

    arch = '64' if self.is_64b else '32'
    if self.ext == '':
        print_info(' '.join([arch, 'bit executable (linux)']))
    elif self.ext == '.so':
        print_info(' '.join([arch, 'bit shared object (linux)']))
    elif self.ext == '.exe':
        print_info(' '.join([arch, 'bit executable (Windows)']))
    else:
        to_print = [arch, 'bit DLL (Windows)']
        if "native" in filetype:
            to_print.append('perhaps a driver (.sys)')
        print_info(' '.join(to_print))

    try:
        self.open_ida(__sessions__.current.file.path)
    except Exception:
        print_error("Unable to start IDA Pro")
def run(self):
    super(Strings, self).run()
    if self.args is None:
        return

    arg_all = self.args.all
    arg_hosts = self.args.hosts
    arg_url = self.args.url
    arg_browser = self.args.browser

    if not __sessions__.is_set():
        self.log('error', "No open session")
        return

    if os.path.exists(__sessions__.current.file.path):
        # regexp = '[\x20\x30-\x39\x41-\x5a\x61-\x7a\-\.:\=]{4,}'
        regexp = '[\x20-\x7e]{4,}'
        strings = re.findall(regexp, __sessions__.current.file.data)
        # Also extract UTF-16LE ("wide") strings.
        leregexp = re.compile(r'(?:[\x20-\x7E][\x00]){3,}')
        lestrings = [w.decode('utf-16le') for w in leregexp.findall(__sessions__.current.file.data)]

        if arg_all:
            for entry in strings:
                self.log('', entry)
            for entry in lestrings:
                self.log('', entry)
        elif arg_hosts:
            self.extract_hosts(strings)
            self.extract_hosts(lestrings)
        elif arg_url:
            self.extract_urls(strings)
            self.extract_urls(lestrings)
        elif arg_browser:
            self.extract_useragents(strings)
            self.extract_useragents(lestrings)
        else:
            self.log('error', 'At least one of the parameters is required')
            self.usage()
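The two patterns used above (runs of four or more printable ASCII bytes, and UTF-16LE "wide" runs of three or more character/NUL pairs) can be exercised on their own. A minimal sketch using bytes patterns; the sample buffer and the extract_strings helper are made up for illustration:

```python
import re

# ASCII printable runs of 4+ bytes, and UTF-16LE "wide" runs of 3+
# char/NUL pairs, mirroring the strings module's two regexes.
ascii_re = re.compile(rb'[\x20-\x7e]{4,}')
wide_re = re.compile(rb'(?:[\x20-\x7E]\x00){3,}')

def extract_strings(data):
    """Return (plain, wide) printable strings found in a bytes blob."""
    plain = [m.decode('ascii') for m in ascii_re.findall(data)]
    wide = [m.decode('utf-16le') for m in wide_re.findall(data)]
    return plain, wide

# Synthetic buffer: junk, an ASCII string, then a UTF-16LE string.
sample = b'\x01\x02ABCD\x00\x00h\x00o\x00s\x00t\x00\x03'
plain, wide = extract_strings(sample)  # (['ABCD'], ['host'])
```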
def run(self):
    super(PDF, self).run()
    if self.args is None:
        return

    if not __sessions__.is_set():
        self.log('error', "No open session")
        return False

    if 'PDF' not in __sessions__.current.file.type:
        # A file with a '%PDF' signature inside the first 1024 bytes is a
        # valid PDF file; the magic lib doesn't detect it if there is an
        # offset before the header.
        with open(__sessions__.current.file.path, 'rb') as pdf_file:
            header = pdf_file.read(1024)
        if b'%PDF' not in header:
            self.log('error', "The opened file doesn't appear to be a PDF document")
            return

    if self.args.subname == 'id':
        self.pdf_id()
    elif self.args.subname == 'streams':
        self.streams()
    else:
        self.log('error', 'At least one of the parameters is required')
        self.usage()
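The fallback header check above can be isolated into a one-liner predicate. looks_like_pdf is an illustrative helper (not Viper's API), demonstrated against a synthetic file with junk bytes before the signature:

```python
import tempfile

def looks_like_pdf(path):
    """Fallback check mirroring the module: a '%PDF' signature anywhere in
    the first 1024 bytes counts as a PDF, even behind a junk prefix that
    defeats libmagic."""
    with open(path, 'rb') as f:
        return b'%PDF' in f.read(1024)

# Demo with a synthetic file: 64 junk bytes before the signature.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b'\x00' * 64 + b'%PDF-1.7\n...')
    sample_path = tmp.name

is_pdf = looks_like_pdf(sample_path)  # True
```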
def run(self):
    super(MetaScan, self).run()
    if self.was_api_error:
        return

    if self.args:
        if self.args.user_agent:
            self.metascan.user_agent = self.args.user_agent

        if self.args.engines:
            self.show_engines()
        elif self.args.find:
            if not __sessions__.find:
                self.log('error', "No find result")
                return
            self.files = self.get_files_from_last_find()
        else:
            if not __sessions__.is_set():
                self.log('error', "No session opened")
                return
            self.files = self.get_file_from_current_session()

        if self.files:
            summary = self.show_analyzed_info()
            self.show_summary(summary)
def get_config(self, family):
    if not __sessions__.is_set():
        self.log('error', "No session opened")
        return

    try:
        module = importlib.import_module('modules.rats.{0}'.format(family))
    except ImportError:
        self.log('error', "There is no module for family {0}".format(bold(family)))
        return

    config = module.config(__sessions__.current.file.data)
    if not config:
        self.log('error', "No Configuration Detected")
        return

    rows = []
    for key, value in config.items():
        rows.append([key, value])
    rows = sorted(rows, key=lambda entry: entry[0])

    self.log('info', "Configuration:")
    self.log('table', dict(header=['Key', 'Value'], rows=rows))
def cmd_rename(self, *args):
    if __sessions__.is_set():
        if not __sessions__.current.file.id:
            self.log('error', "The opened file does not have an ID, have you stored it yet?")
            return

        self.log('info', "Current name is: {}".format(bold(__sessions__.current.file.name)))
        new_name = input("New name: ")
        if not new_name:
            self.log('error', "File name can't be empty!")
            return

        self.db.rename(__sessions__.current.file.id, new_name)

        self.log('info', "Refreshing session to update attributes...")
        __sessions__.new(__sessions__.current.file.path)
    else:
        self.log('error', "No open session")
def run(self):
    super(Strings, self).run()
    if self.args is None:
        return

    arg_all = self.args.all
    arg_hosts = self.args.hosts

    if not __sessions__.is_set():
        self.log('error', "No session opened")
        return

    if os.path.exists(__sessions__.current.file.path):
        regexp = '[\x20\x30-\x39\x41-\x5a\x61-\x7a\-\.:]{4,}'
        strings = re.findall(regexp, __sessions__.current.file.data)

        if arg_all:
            for entry in strings:
                self.log('', entry)
        elif arg_hosts:
            self.extract_hosts(strings)
        else:
            self.log('error', 'At least one of the parameters is required')
            self.usage()
def run(self):
    if not __sessions__.is_set():
        print_error("No session opened")
        return

    if not HAVE_EXIF:
        print_error("Missing dependency, install pyexiftool")
        return

    try:
        with exiftool.ExifTool() as et:
            metadata = et.get_metadata(__sessions__.current.file.path)
    except OSError:
        print_error("Exiftool is not installed")
        return

    rows = []
    for key, value in metadata.items():
        rows.append([key, value])
    rows = sorted(rows, key=lambda entry: entry[0])

    print_info("MetaData:")
    print(table(header=['Key', 'Value'], rows=rows))
def run(self):
    super(Irma, self).run()
    if self.args:
        if self.args.engines:
            self.show_engines()
        if self.args.list_scans:
            self.show_scans()

        if self.args.find:
            self.find(self.args.find)
        elif self.args.scan:
            if not __sessions__.find:
                self.log('error', "No find result")
                return
            self.files = self.get_files_from_last_find()
        else:
            if not __sessions__.is_set():
                self.log('error', "No session opened")
                return
            self.files = self.get_file_from_current_session()

        if self.files:
            summary = self.show_analyzed_info(self.files[0][0])
            # self.show_summary(summary)
    return
def start(self):
    # Logo.
    logo()

    # Setup shell auto-complete.
    def complete(text, state):
        # Try to autocomplete commands.
        cmds = [i for i in self.cmd.commands if i.startswith(text)]
        if state < len(cmds):
            return cmds[state]

        # Try to autocomplete modules.
        mods = [i for i in __modules__ if i.startswith(text)]
        if state < len(mods):
            return mods[state]

        # Then autocomplete paths.
        if text.startswith("~"):
            text = "{0}{1}".format(os.getenv("HOME"), text[1:])
        return (glob.glob(text + '*') + [None])[state]

    # Auto-complete on tabs.
    readline.set_completer_delims(' \t\n;')
    readline.parse_and_bind('tab: complete')
    readline.set_completer(complete)

    # Save commands in history file.
    def save_history(path):
        readline.write_history_file(path)

    # If there is a history file, read from it and load the history
    # so that it can be used in the shell.
    # We store the history file in the local project folder if there is
    # an opened project, otherwise in the home directory.
    if __project__.path:
        history_path = os.path.join(__project__.path, 'history')
    else:
        history_path = os.path.expanduser('~/.viperhistory')

    if os.path.exists(history_path):
        readline.read_history_file(history_path)

    # Register saving the history at program exit.
    atexit.register(save_history, path=history_path)

    # Main loop.
    while self.active:
        # If there is an open session, we include the path to the opened
        # file in the shell prompt.
        # TODO: perhaps this block should be moved into the session so that
        # the generation of the prompt is done only when the session's
        # status changes.
        prefix = ''
        if __project__.name:
            prefix = bold(cyan(__project__.name)) + ' '

        if __sessions__.is_set():
            prompt = (prefix + cyan('viper ', True) +
                      white(__sessions__.current.file.name, True) +
                      cyan(' > ', True))
        # Otherwise display the basic prompt.
        else:
            prompt = prefix + cyan('viper > ', True)

        # Wait for input from the user.
        try:
            data = raw_input(prompt).strip()
        except KeyboardInterrupt:
            print("")
        # Terminate on EOF.
        except EOFError:
            self.stop()
            print("")
            continue
        # Parse the input if the user provided any.
        else:
            # If there are recognized keywords, we replace them with
            # their respective value.
            data = self.keywords(data)

            # Skip if the input is empty.
            if not data:
                continue

            # Check for output redirection.
            # If there is a > in the string, we assume the user wants to
            # output to file.
            if '>' in data:
                try:
                    data, filename = data.split('>')
                except ValueError:
                    filename = False
            else:
                filename = False

            # If the input starts with an exclamation mark, we treat the
            # input as a bash command and execute it.
            # At this point the keywords should be replaced.
            if data.startswith('!'):
                os.system(data[1:])
                continue

            # Try to split commands by ; so that you can sequence multiple
            # commands at once.
            # For example:
            #   viper > find name *.pdf; open --last 1; pdf id
            # This will automatically search for all PDF files, open the
            # first entry and run the pdf module against it.
            split_commands = data.split(';')
            for split_command in split_commands:
                split_command = split_command.strip()
                if not split_command:
                    continue

                # If it's an internal command, we parse the input and split
                # it between root command and arguments.
                root, args = self.parse(split_command)

                # Check if the command instructs to terminate.
                if root in ('exit', 'quit'):
                    self.stop()
                    continue

                try:
                    # If the root command is part of the embedded commands
                    # list we execute it.
                    if root in self.cmd.commands:
                        self.cmd.commands[root]['obj'](*args)
                        self.print_output(self.cmd.output, filename)
                        del(self.cmd.output[:])
                    # If the root command is part of loaded modules, we
                    # initialize the module and execute it.
                    elif root in __modules__:
                        module = __modules__[root]['obj']()
                        module.set_commandline(args)
                        module.run()
                        self.print_output(module.output, filename)
                        del(module.output[:])
                    else:
                        print("Command not recognized.")
                except KeyboardInterrupt:
                    pass
                except Exception:
                    print_error("The command {0} raised an exception:".format(bold(root)))
                    traceback.print_exc()
def run(self):
    super(Office, self).run()
    if self.args is None:
        return

    if not __sessions__.is_set():
        self.log('error', "No open session. This command expects a file to be open.")
        return

    if not HAVE_OLE:
        self.log('error', "Missing dependency, install OleFileIO (`pip install olefile oletools`)")
        return

    file_data = __sessions__.current.file.data
    if file_data.startswith(b'<?xml'):
        OLD_XML = file_data
    else:
        OLD_XML = False

    if file_data.startswith(b'MIME-Version:') and b'application/x-mso' in file_data:
        MHT_FILE = file_data
    else:
        MHT_FILE = False

    # Check for old office formats.
    try:
        doctype = ooxml.get_type(__sessions__.current.file.path)
        OOXML_FILE = True
    except Exception:
        OOXML_FILE = False

    # Set defaults.
    XLSX_FILE = False
    EXCEL_XML_FILE = False
    DOCX_FILE = False

    if OOXML_FILE is True:
        if doctype == ooxml.DOCTYPE_EXCEL:
            XLSX_FILE = True
        elif doctype in (ooxml.DOCTYPE_EXCEL_XML, ooxml.DOCTYPE_EXCEL_XML2003):
            EXCEL_XML_FILE = True
        elif doctype in (ooxml.DOCTYPE_WORD_XML, ooxml.DOCTYPE_WORD_XML2003):
            DOCX_FILE = True

    # Tests to check for valid Office structures.
    OLE_FILE = olefile.isOleFile(__sessions__.current.file.path)
    XML_FILE = zipfile.is_zipfile(__sessions__.current.file.path)

    if OLE_FILE:
        ole = olefile.OleFileIO(__sessions__.current.file.path)
    elif XML_FILE:
        zip_xml = zipfile.ZipFile(__sessions__.current.file.path, 'r')
    elif OLD_XML:
        pass
    elif MHT_FILE:
        pass
    elif DOCX_FILE:
        pass
    elif EXCEL_XML_FILE:
        pass
    elif XLSX_FILE:
        pass
    else:
        self.log('error', "Not a valid office document")
        return

    if self.args.export is not None:
        if OLE_FILE:
            self.export(ole, self.args.export)
        elif XML_FILE:
            self.xml_export(zip_xml, self.args.export)
    elif self.args.meta:
        if OLE_FILE:
            self.metadata(ole)
        elif XML_FILE:
            self.xmlmeta(zip_xml)
    elif self.args.streams:
        if OLE_FILE:
            self.metatimes(ole)
        elif XML_FILE:
            self.xmlstruct(zip_xml)
    elif self.args.oleid:
        if OLE_FILE:
            self.oleid(ole)
        else:
            self.log('error', "Not an OLE file")
    elif self.args.vba or self.args.code:
        self.parse_vba(self.args.code)
    elif self.args.dde:
        self.get_dde(__sessions__.current.file.path)
    else:
        self.log('error', 'At least one of the parameters is required')
        self.usage()
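The OLE-vs-ZIP branching above can be approximated without olefile by checking the compound-file magic bytes directly: legacy OLE documents begin with D0 CF 11 E0 A1 B1 1A E1, while OOXML documents (.docx/.xlsx) are ordinary ZIP archives. office_container below is an illustrative sketch, not Viper's API:

```python
import io
import zipfile

# Magic value distinguishing the two main Office containers.
OLE_MAGIC = b'\xd0\xcf\x11\xe0\xa1\xb1\x1a\xe1'

def office_container(data):
    """Classify a document blob by container type."""
    if data.startswith(OLE_MAGIC):
        return 'ole'
    if zipfile.is_zipfile(io.BytesIO(data)):
        return 'zip'
    return 'unknown'

# Build a tiny in-memory ZIP to stand in for an OOXML document.
buf = io.BytesIO()
with zipfile.ZipFile(buf, 'w') as zf:
    zf.writestr('[Content_Types].xml', '<Types/>')
docx_like = buf.getvalue()
```

zipfile.is_zipfile checks the archive structure rather than only the leading 'PK' bytes, which mirrors the module's use of it on the sample path.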
def run(self, *args):
    def string_clean(value):
        if value:
            if isinstance(value, bytes):
                if sys.version_info < (3, 4):
                    value = value.decode('utf-8', 'ignore')
                else:
                    value = value.decode('utf-8', 'backslashreplace')
            elif isinstance(value, email.header.Header):
                value = str(value)
            return re.sub('[\n\t\r]', '', str(value))
        return ""

    def parse_ole_msg(ole):
        email_h = None
        stream_dirs = ole.listdir()
        for stream in stream_dirs:
            # Get the stream that contains the email header.
            if stream[0].startswith('__substg1.0_007D'):
                email_h = ole.openstream(stream).read()
                if stream[0].endswith('001F'):
                    # Unicode; probably needs something better than just
                    # stripping \x00.
                    email_h = email_h.replace(b'\x00', b'')
        # If it came from Outlook we may need to trim some lines.
        try:
            email_h = email_h.split(b'Version 2.0\x0d\x0a', 1)[1]
        except Exception:
            pass

        if not email_h:
            self.log('warning', 'This OLE file is not an email.')
            return None

        # Leaving us an RFC compliant email to parse.
        if isinstance(email_h, str):
            # Python 2 madness.
            msg = email.message_from_string(email_h)
        else:
            msg = email.message_from_bytes(email_h)
        return msg

    def parse_ole_attachments(ole):
        # FIXME: Never used
        # Hard part now: each part of the attachment is in a separate
        # stream. We need a unique stream id for each attachment; it's in
        # the stream name as an 8 digit number.
        header = ['#', 'Size', 'MD5', 'Filename', 'MimeType']
        rows = []
        for i in range(20):  # Arbitrary count; we don't expect this many.
            stream_number = str(i).zfill(8)
            stream_name = '__attach_version1.0_#' + stream_number
            # Unicode
            try:
                att_filename = ole.openstream(stream_name + '/__substg1.0_3704001F').read().decode()
                att_mime = ole.openstream(stream_name + '/__substg1.0_370E001F').read().decode()
                att_data = ole.openstream(stream_name + '/__substg1.0_37010102').read()
                att_size = len(att_data)
                att_md5 = hashlib.md5(att_data).hexdigest()
                rows.append([i, att_size, att_md5, att_filename, att_mime])
            except Exception:
                pass
            # ASCII
            try:
                att_filename = ole.openstream(stream_name + '/__substg1.0_3704001E').read().decode()
                att_mime = ole.openstream(stream_name + '/__substg1.0_370E001E').read().decode()
                att_data = ole.openstream(stream_name + '/__substg1.0_37010102').read()
                att_size = len(att_data)
                att_md5 = hashlib.md5(att_data).hexdigest()
                rows.append([i, att_size, att_md5, att_filename, att_mime])
            except Exception:
                pass
        self.log('table', dict(header=header, rows=rows))

    def att_session(att_id, msg, ole_flag):
        att_count = 0
        if ole_flag:
            ole = msg
            # Each part of the attachment is in a separate stream; the
            # unique stream id is in the stream name as an 8 digit number.
            for i in range(20):  # Arbitrary count; we don't expect this many.
                stream_number = str(i).zfill(8)
                stream_name = '__attach_version1.0_#' + stream_number
                # Unicode
                try:
                    att_filename = ole.openstream(stream_name + '/__substg1.0_3704001F').read()
                    att_filename = att_filename.replace(b'\x00', b'').decode()
                    att_data = ole.openstream(stream_name + '/__substg1.0_37010102').read()
                except Exception:
                    pass
                # ASCII
                try:
                    att_filename = ole.openstream(stream_name + '/__substg1.0_3704001E').read().decode()
                    att_data = ole.openstream(stream_name + '/__substg1.0_37010102').read()
                except Exception:
                    pass
                if i == att_id:
                    self.log('info', "Switching session to {0}".format(att_filename))
                    tmp_path = os.path.join(tempfile.gettempdir(), att_filename)
                    with open(tmp_path, 'wb') as tmp:
                        tmp.write(att_data)
                    __sessions__.new(tmp_path)
                    return
        else:
            for part in msg.walk():
                if part.get_content_type() == 'message/rfc822':
                    rfc822 = True
                else:
                    rfc822 = False

                if part.get_content_maintype() == 'multipart' or not part.get('Content-Disposition') and not rfc822:
                    continue

                att_count += 1
                if att_count == att_id:
                    if rfc822:
                        data = part.as_string()
                        m = re.match("Content-Type: message/rfc822\r?\n\r?\n(.*)", data, flags=re.S)
                        if not m:
                            self.log('error', "Could not extract RFC822 formatted message")
                            return
                        data = m.group(1)
                        att_size = len(data)
                        filename = "rfc822msg_{0}.eml".format(att_size)
                    else:
                        data = part.get_payload(decode=True)
                        filename = part.get_filename()

                    self.log('info', "Switching session to {0}".format(filename))

                    if data:
                        tmp_path = os.path.join(tempfile.gettempdir(), filename)
                        with open(tmp_path, 'wb') as tmp:
                            tmp.write(data)
                        __sessions__.new(tmp_path)
                        return

    def email_envelope(msg):
        # Envelope
        self.log('info', "Email envelope:")
        rows = [['Subject', msg.get("Subject")],
                ['To', msg.get("To")],
                ['From', msg.get("From")],
                ['Cc', msg.get("Cc")],
                ['Bcc', msg.get("Bcc")],
                ['Date', msg.get("Date")]]
        self.log('table', dict(header=['Key', 'Value'], rows=rows))
        return

    def email_header(msg):
        # Headers
        rows = []
        for x in msg.keys():
            # Adding Received to the ignore list; it has to be handled
            # separately if there is more than one line.
            if x not in ['Subject', 'From', 'To', 'Date', 'Cc', 'Bcc', 'DKIM-Signature', 'Received']:
                rows.append([x, string_clean(msg.get(x))])
        for x in msg.get_all('Received'):
            rows.append(['Received', string_clean(x)])
        self.log('info', "Email headers:")
        rows = sorted(rows, key=lambda entry: entry[0])
        self.log('table', dict(header=['Key', 'Value'], rows=rows))
        return

    def email_trace(msg, verbose):
        rows = []
        if verbose:
            fields = ['from', 'by', 'with', 'id', 'for', 'timestamp']
        else:
            fields = ['from', 'by', 'timestamp']

        for x in msg.get_all('Received'):
            x = string_clean(x)
            cre = re.compile(r"""
                (?: from \s+ (?P<from>.*?) (?=by|with|id|ID|for|;|$) )?
                (?: by \s+ (?P<by>.*?) (?=with|id|ID|for|;|$) )?
                (?: with \s+ (?P<with>.*?) (?=id|ID|for|;|$) )?
                (?: (id|ID) \s+ (?P<id>.*?) (?=for|;|$) )?
                (?: for \s+ (?P<for>.*?) (?=;|$) )?
                (?: \s* ; \s* (?P<timestamp>.*) )?
                """, flags=re.X | re.I)
            m = cre.search(x)
            if not m:
                self.log('error', "Received header regex didn't match")
                return
            t = []
            for groupname in fields:
                t.append(string_clean(m.group(groupname)))
            rows.insert(0, t)

        self.log('info', "Email path trace:")
        self.log('table', dict(header=fields, rows=rows))
        return

    def email_spoofcheck(msg, dnsenabled):
        self.log('info', "Email spoof check:")
        # Test 1: check if the From address is the same as Sender,
        # Reply-To, and Return-Path.
        rows = [['Sender', string_clean(msg.get("Sender"))],
                ['From', string_clean(msg.get("From"))],
                ['Reply-To', string_clean(msg.get("Reply-To"))],
                ['Return-Path', string_clean(msg.get("Return-Path"))]]
        self.log('table', dict(header=['Key', 'Value'], rows=rows))
        addr = {
            'Sender': email.utils.parseaddr(string_clean(msg.get("Sender")))[1],
            'From': email.utils.parseaddr(string_clean(msg.get("From")))[1],
            'Reply-To': email.utils.parseaddr(string_clean(msg.get("Reply-To")))[1],
            'Return-Path': email.utils.parseaddr(string_clean(msg.get("Return-Path")))[1]
        }
        if addr['From'] == '':
            self.log('error', "No From address!")
            return
        elif addr['Sender'] and (addr['From'] != addr['Sender']):
            self.log('warning', "Email FAILED: From address different than Sender")
        elif addr['Reply-To'] and (addr['From'] != addr['Reply-To']):
            self.log('warning', "Email FAILED: From address different than Reply-To")
        elif addr['Return-Path'] and (addr['From'] != addr['Return-Path']):
            self.log('warning', "Email FAILED: From address different than Return-Path")
        else:
            self.log('success', "Email PASSED: From address the same as Sender, Reply-To, and Return-Path")

        # Test 2: check whether the first Received by domain matches the
        # sender's MX domain.
        if not dnsenabled:
            self.log('info', "Unable to run Received by / sender check without dnspython available")
        else:
            r = msg.get_all('Received')[-1]
            m = re.search(r"by\s+(\S*?)(?:\s+\(.*?\))?\s+with", r)
            if not m:
                self.log('error', "Received header regex didn't match")
                return
            byname = m.group(1)
            # This can be either a name or an IP.
            m = re.search(r"(\w+\.\w+|\d+\.\d+\.\d+\.\d+)$", byname)
            if not m:
                self.log('error', "Could not find domain or IP in Received by field")
                return
            bydomain = m.group(1)
            domains = [['Received by', bydomain]]
            # If it's an IP, do the reverse lookup.
            m = re.search(r"\.\d+$", bydomain)
            if m:
                bydomain = str(dns.reversename.from_address(bydomain)).strip('.')
                domains.append(['Received by reverse lookup', bydomain])
            # If the email has a Sender header, use that.
            if addr['Sender'] != "":
                m = re.search(r"(\w+\.\w+)$", addr['Sender'])
                if not m:
                    self.log('error', "Sender header regex didn't match")
                    return
                fromdomain = m.group(1)
                domains.append(['Sender', fromdomain])
            # Otherwise, use the From header.
            else:
                m = re.search(r"(\w+\.\w+)$", addr['From'])
                if not m:
                    self.log('error', "From header regex didn't match")
                    return
                fromdomain = m.group(1)
                domains.append(['From', fromdomain])
            bymatch = False
            try:
                mx = dns.resolver.query(fromdomain, 'MX')
                if mx:
                    for rdata in mx:
                        m = re.search(r"(\w+\.\w+).$", str(rdata.exchange))
                        if not m:
                            self.log('error', "MX domain regex didn't match")
                            continue
                        domains.append(['MX for ' + fromdomain, m.group(1)])
                        if bydomain == m.group(1):
                            bymatch = True
                self.log('table', dict(header=['Key', 'Value'], rows=domains))
            except Exception:
                domains.append(['MX for ' + fromdomain, "not registered in DNS"])
                self.log('table', dict(header=['Key', 'Value'], rows=domains))
            if bymatch:
                self.log('success', "Email PASSED: Received by domain found in Sender/From MX domains")
            else:
                self.log('warning', "Email FAILED: Could not match Received by domain to Sender/From MX")

        # Test 3: look at SPF records.
        rspf = []
        results = set()
        allspf = msg.get_all('Received-SPF')
        if not allspf:
            return
        for spf in allspf:
            # self.log('info', string_clean(spf))
            m = re.search(r"\s*(\w+)\s+\((.*?):\s*(.*?)\)\s+(.*);", string_clean(spf))
            if not m:
                self.log('error', "Received-SPF regex didn't match")
                return
            rspf.append([m.group(2), m.group(1), m.group(3), m.group(4)])
            results = results | {m.group(1)}
        self.log('table', dict(header=['Domain', 'Action', 'Info', 'Additional'], rows=rspf))
        if results & {'fail', 'softfail'}:
            self.log('warning', "Email FAILED: Found fail or softfail SPF results")
        elif results & {'none', 'neutral'}:
            self.log('warning', "Email NEUTRAL: Found none or neutral SPF results")
        elif results & {'permerror', 'temperror'}:
            self.log('warning', "Email NEUTRAL: Found error condition")
        elif results & {'pass'}:
            self.log('success', "Email PASSED: Found SPF pass result")
        return

    def email_attachments(msg, ole_flag):
        # Attachments
        att_count = 0
        rows = []
        links = []
        if ole_flag:
            ole = msg
            # Each part of the attachment is in a separate stream; the
            # unique stream id is in the stream name as an 8 digit number.
            for i in range(20):  # Arbitrary count; we don't expect this many.
                stream_number = str(i).zfill(8)
                stream_name = '__attach_version1.0_#' + stream_number
                # Unicode
                try:
                    att_filename = ole.openstream(stream_name + '/__substg1.0_3704001F').read()
                    att_filename = att_filename.replace(b'\x00', b'').decode()
                    att_mime = ole.openstream(stream_name + '/__substg1.0_370E001F').read()
                    att_mime = att_mime.replace(b'\x00', b'').decode()
                    att_data = ole.openstream(stream_name + '/__substg1.0_37010102').read()
                    att_size = len(att_data)
                    att_md5 = hashlib.md5(att_data).hexdigest()
                    rows.append([i, att_filename, att_mime, att_size, att_md5])
                    att_count += 1
                except Exception:
                    pass
                # ASCII
                try:
                    att_filename = ole.openstream(stream_name + '/__substg1.0_3704001E').read().decode()
                    att_mime = ole.openstream(stream_name + '/__substg1.0_370E001E').read().decode()
                    att_data = ole.openstream(stream_name + '/__substg1.0_37010102').read()
                    att_size = len(att_data)
                    att_md5 = hashlib.md5(att_data).hexdigest()
                    rows.append([i, att_filename, att_mime, att_size, att_md5])
                    att_count += 1
                except Exception:
                    pass
        else:
            # Walk through the email parts.
            for part in msg.walk():
                content_type = part.get_content_type()

                if content_type == 'multipart':
                    continue

                if content_type in ('text/plain', 'text/html'):
                    part_content = part.get_payload(decode=True)
                    for link in re.findall(rb'(https?://[^"<>\s]+)', part_content):
                        if link not in links:
                            links.append(link.decode())

                if content_type == 'message/rfc822':
                    part_content = part.as_string()
                    m = re.match("Content-Type: message/rfc822\r?\n\r?\n(.*)", part_content, flags=re.S)
                    if not m:
                        self.log('error', "Could not extract RFC822 formatted message")
                        return
                    part_content = m.group(1)
                    att_size = len(part_content)
                    att_file_name = "rfc822msg_{0}.eml".format(att_size)
                    att_md5 = hashlib.md5(part_content.encode()).hexdigest()
                    att_count += 1
                    rows.append([att_count, att_file_name, content_type, att_size, att_md5])
                    continue

                if not part.get('Content-Disposition'):
                    # These are not attachments.
                    continue

                att_file_name = part.get_filename()
                if not att_file_name:
                    continue

                att_data = part.get_payload(decode=True)
                att_size = len(att_data)
                att_md5 = hashlib.md5(att_data).hexdigest()
                att_count += 1
                rows.append([att_count, att_file_name, part.get_content_type(), att_size, att_md5])

        self.log('info', "Email attachments (total: {0}):".format(att_count))
        if att_count > 0:
            self.log('table', dict(header=['ID', 'FileName', 'Content Type', 'File Size', 'MD5'], rows=rows))

        self.log('info', "Email links:")
        for link in links:
            self.log('item', link)
        return

    super(EmailParse, self).run(*args)
    if self.args is None:
        return

    # Start here.
    if not __sessions__.is_set():
        self.log('error', "No open session. This command expects a file to be open.")
        return

    # See if we can load the dns library for MX lookup spoof detection.
    try:
        import dns.resolver
        import dns.reversename
        dnsenabled = True
    except ImportError:
        dnsenabled = False

    # Try to open as an OLE msg; if that fails, treat it as an email string.
    try:
        ole = olefile.OleFileIO(__sessions__.current.file.data)
        msg = parse_ole_msg(ole)
        if not msg:
            return
        ole_flag = True
    except Exception:
        ole_flag = False
        if sys.version_info < (3, 0):
            msg = email.message_from_string(__sessions__.current.file.data)
        else:
            msg = email.message_from_bytes(__sessions__.current.file.data)

    if self.args.open is not None:
        if ole_flag:
            msg = ole
        att_session(self.args.open, msg, ole_flag)
    elif self.args.envelope:
        email_envelope(msg)
    elif self.args.attach:
        if ole_flag:
            msg = ole
        email_attachments(msg, ole_flag)
    elif self.args.header:
        email_header(msg)
    elif self.args.trace:
        email_trace(msg, False)
    elif self.args.traceall:
        email_trace(msg, True)
    elif self.args.spoofcheck:
        email_spoofcheck(msg, dnsenabled)
    elif self.args.all:
        email_envelope(msg)
        email_header(msg)
        email_trace(msg, True)
        email_spoofcheck(msg, dnsenabled)
        if ole_flag:
            msg = ole
        email_attachments(msg, ole_flag)
    else:
        self.log('error', 'At least one of the parameters is required')
        self.usage()
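The clause-by-clause verbose regex used by email_trace can be exercised against a synthetic Received header value (the header below is made up for illustration; real headers vary considerably and the optional groups may come back None):

```python
import re

# The same verbose pattern used by email_trace, applied to one header value.
RECEIVED_RE = re.compile(r"""
    (?: from \s+ (?P<from>.*?) (?=by|with|id|ID|for|;|$) )?
    (?: by \s+ (?P<by>.*?) (?=with|id|ID|for|;|$) )?
    (?: with \s+ (?P<with>.*?) (?=id|ID|for|;|$) )?
    (?: (id|ID) \s+ (?P<id>.*?) (?=for|;|$) )?
    (?: for \s+ (?P<for>.*?) (?=;|$) )?
    (?: \s* ; \s* (?P<timestamp>.*) )?
    """, flags=re.X | re.I)

# Synthetic Received header; the timestamp follows the semicolon per RFC 5322.
header = ("from mail.example.com by mx.example.org with ESMTP "
          "id ABC123 for <user@example.org>; Mon, 2 Jul 2018 10:00:00 +0000")
m = RECEIVED_RE.search(header)
```

Each clause group is optional and lazy, terminated by a lookahead for the next keyword, which is what lets the same pattern cope with headers that omit the `with`, `id`, or `for` clauses.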
def start(self):
    # Log start.
    log.info('Starting viper-cli')

    # Logo.
    logo()

    # Setup shell auto-complete.
    def complete(text, state):
        # Filesystem path completion only makes sense for a few
        # commands/modules.
        fs_path_completion = False

        # Clean up user input so far (no leading/trailing/duplicate spaces).
        line = " ".join(readline.get_line_buffer().split())
        # Split words; e.g. "store -f /tmp" -> ['store', '-f', '/tmp'].
        words = line.split(" ")

        if words[0] in [i for i in self.cmd.commands]:
            # Handle completion for commands.
            # Enable filesystem path completion for certain commands
            # (e.g. export, store).
            if words[0] in [x for x in self.cmd.commands if self.cmd.commands[x]["fs_path_completion"]]:
                fs_path_completion = True

            options = [key for key in self.cmd.commands[words[0]]["parser_args"]]

            # Enable tab completion for projects --switch.
            if words[0] == "projects":
                if "--switch" in words or "-s" in words:
                    options += get_project_list()

            # Enable tab completion for copy (list projects).
            if words[0] == "copy":
                options += get_project_list()

            completions = [i for i in options if i.startswith(text) and i not in words]

        elif words[0] in [i for i in __modules__]:
            # Handle completion for modules.
            if len(words) == 1:
                # Only the module name is given so far - present all args
                # and the subparsers (if any).
                options = [key for key in __modules__[words[0]]["parser_args"]]
                options += [key for key in __modules__[words[0]]["subparser_args"]]
            elif len(words) == 2:
                # One complete word and one either complete or incomplete
                # that specifies the subparser or an arg.
                if words[1] in list(__modules__[words[0]]["parser_args"]):
                    # A full arg for a module is given.
                    options = [key for key in __modules__[words[0]]["parser_args"]]
                elif words[1] in list(__modules__[words[0]]["subparser_args"]):
                    # A subparser is specified - get all subparser args.
                    options = [key for key in __modules__[words[0]]["subparser_args"][words[1]]]
                else:
                    options = [key for key in __modules__[words[0]]["parser_args"]]
                    options += [key for key in __modules__[words[0]]["subparser_args"]]
            else:
                # More than 2 words.
                if words[1] in list(__modules__[words[0]]["subparser_args"]):
                    # A subparser is specified - get all subparser args.
                    options = [key for key in __modules__[words[0]]["subparser_args"][words[1]]]
                else:
                    options = [key for key in __modules__[words[0]]["parser_args"]]

            completions = [i for i in options if i.startswith(text) and i not in words]

        else:
            # Initial completion for both commands and modules.
            completions = [i for i in self.cmd.commands if i.startswith(text)]
            completions += [i for i in __modules__ if i.startswith(text)]

        if state < len(completions):
            return completions[state]

        if fs_path_completion:
            # Completion for paths, only if it makes sense.
            if text.startswith("~"):
                text = "{0}{1}".format(expanduser("~"), text[1:])
            return (glob.glob(text + '*') + [None])[state]

        return

    # Auto-complete on tabs.
    readline.set_completer_delims(' \t\n;')
    readline.parse_and_bind('tab: complete')
    readline.set_completer(complete)

    # Save commands in history file.
    def save_history(path):
        readline.write_history_file(path)

    # If there is a history file, read from it and load the history
    # so that it can be used in the shell.
    # The history file is stored in the local project folder.
    history_path = os.path.join(__project__.path, 'history')
    if os.path.exists(history_path):
        readline.read_history_file(history_path)
    readline.set_history_length(10000)

    # Register saving the history at program exit.
    atexit.register(save_history, path=history_path)

    # Main loop.
    while self.active:
        # If there is an open session, we include the path to the opened
        # file in the shell prompt.
        # TODO: perhaps this block should be moved into the session so that
        # the generation of the prompt is done only when the session's
        # status changes.
prefix = '' if __project__.name: prefix = bold(cyan(__project__.name)) + ' ' if __sessions__.is_set(): stored = '' filename = '' if __sessions__.current.file: filename = __sessions__.current.file.name if not Database().find(key='sha256', value=__sessions__.current.file.sha256): stored = magenta(' [not stored]', True) misp = '' if __sessions__.current.misp_event: misp = ' [MISP' if __sessions__.current.misp_event.event.id: misp += ' {}'.format(__sessions__.current.misp_event.event.id) else: misp += ' New Event' if __sessions__.current.misp_event.off: misp += ' (Offline)' misp += ']' prompt = (prefix + cyan('viper ', True) + white(filename, True) + blue(misp, True) + stored + cyan(' > ', True)) # Otherwise display the basic prompt. else: prompt = prefix + cyan('viper > ', True) # force str (Py3) / unicode (Py2) for prompt if sys.version_info <= (3, 0): prompt = prompt.encode('utf-8') else: prompt = str(prompt) # Wait for input from the user. try: data = input(prompt).strip() except KeyboardInterrupt: print("") # Terminate on EOF. except EOFError: self.stop() print("") continue # Parse the input if the user provided any. else: # If there are recognized keywords, we replace them with # their respective value. data = self.keywords(data) # Skip if the input is empty. if not data: continue # Check for output redirection # If there is a > in the string, we assume the user wants to output to file. if '>' in data: data, console_output['filename'] = data.split('>', 1) if ';' in console_output['filename']: console_output['filename'], more_commands = console_output['filename'].split(';', 1) data = '{};{}'.format(data, more_commands) print("Writing output to {0}".format(console_output['filename'].strip())) # If the input starts with an exclamation mark, we treat the # input as a bash command and execute it. # At this point the keywords should be replaced. 
if data.startswith('!'): os.system(data[1:]) continue # Try to split commands by ; so that you can sequence multiple # commands at once. # For example: # viper > find name *.pdf; open --last 1; pdf id # This will automatically search for all PDF files, open the first entry # and run the pdf module against it. split_commands = data.split(';') for split_command in split_commands: split_command = split_command.strip() if not split_command: continue # If it's an internal command, we parse the input and split it # between root command and arguments. root, args = self.parse(split_command) # Check if the command instructs to terminate. if root in ('exit', 'quit'): self.stop() continue try: # If the root command is part of the embedded commands list we # execute it. if root in self.cmd.commands: self.cmd.commands[root]['obj'](*args) del(self.cmd.output[:]) # If the root command is part of loaded modules, we initialize # the module and execute it. elif root in __modules__: module = __modules__[root]['obj']() module.set_commandline(args) module.run() if cfg.modules.store_output and __sessions__.is_set(): try: Database().add_analysis(__sessions__.current.file.sha256, split_command, module.output) except Exception: pass del(module.output[:]) else: print("Command not recognized.") except KeyboardInterrupt: pass except Exception: print_error("The command {0} raised an exception:".format(bold(root))) traceback.print_exc() console_output['filename'] = None # reset output to stdout
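The output-redirection handling in the main loop above (splitting on the first `>` and re-attaching any `;`-separated commands that follow the filename) can be sketched in isolation. `parse_redirection` is a hypothetical helper for illustration, not part of Viper:

```python
def parse_redirection(data):
    """Split a console line into (commands, output_filename).

    Mirrors the '>' handling in the main loop: everything after the first
    '>' is taken as the output file, but any ';'-separated commands that
    follow it are glued back onto the command string. Returns
    filename=None when there is no redirection.
    """
    if '>' not in data:
        return data, None
    data, filename = data.split('>', 1)
    if ';' in filename:
        filename, more_commands = filename.split(';', 1)
        data = '{};{}'.format(data, more_commands)
    return data, filename.strip()
```

For example, `parse_redirection("find name *.pdf > out.txt; open --last 1")` keeps `open --last 1` in the command sequence while directing output to `out.txt`.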
def run(self):
    super(Shellcode, self).run()
    if self.args is None:
        return

    if not __sessions__.is_set():
        self.log('error', "No session opened")
        return

    # Each entry carries two patterns: one matching the raw signature bytes
    # and one matching a hex-encoded form of the same signature. Both are
    # bytes literals, since they are matched against the raw file data.
    collection = [
        {
            'description': 'FS:[30h] shellcode',
            'patterns': [
                b'\x64\xa1\x30\x00|\x64\x8b\x0d\x30|\x64\x8b\x0d\x30|\x64\x8b\x15\x30|\x64\x8b\x35\x30|\x64\x8b\x3d\x30|\x6a\x30.\x64\x8b|\x33..\xb3\x64\x8b',
                b'64a13000|648b0d30|648b0d30|648b1530|648b3530|648b3d30|6a30..648b|33....b3648b'
            ]
        },
        {
            'description': 'FS:[00h] shellcode',
            'patterns': [
                b'\x64\x8b\x1d|\x64\xa1\x00|\x64\x8b\x0d|\x64\x8b\x15|\x64\x8b\x35|\x64\x8b\x3d',
                b'648b1d00|64a10000|648b0d00|648b1500|648b3500|648b3d00'
            ]
        },
        {
            'description': 'API hashing',
            'patterns': [
                b'\x74.\xc1.\x0d\x03|\x74.\xc1.\x07\x03',
                b'74..c1..0d03|74..c1..0703'
            ]
        },
        {
            'description': 'PUSH DWORD[]/CALL[]',
            'patterns': [
                b'\xff\x75\xff\x55',
                b'ff75ff55'
            ]
        },
        {
            'description': 'FLDZ/FSTENV [esp-12]',
            'patterns': [
                b'\xd9\xee\xd9\x74\x24\xf4',
                b'd9eed97424f4'
            ]
        },
        {
            'description': 'CALL next/POP',
            'patterns': [
                b'\xe8\x00\x00\x00\x00(\x58|\x59|\x5a|\x5b|\x5e|\x5f|\x5d)',
                b'e800000000(58|59|5a|5b|5e|5f|5d)'
            ]
        },
        {
            'description': 'Function prolog',
            'patterns': [
                b'\x55\x8b\xec\x83\xc4|\x55\x8b\xec\x81\xec|\x55\x8b\xec\x8b|\x55\x8b\xec\xe8|\x55\x8b\xec\xe9',
                b'558bec83c4|558bec81ec|558bec8b|558bece8|558bece9'
            ]
        },
    ]

    self.log('info', "Searching for known shellcode patterns...")

    for entry in collection:
        for pattern in entry['patterns']:
            match = re.search(pattern, __sessions__.current.file.data)
            if match:
                offset = match.start()
                self.log('info', "{0} pattern matched at offset {1}".format(entry['description'], offset))
                self.log('', cyan(hexdump(__sessions__.current.file.data[offset:], maxlines=15)))
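The scan above boils down to `re.search` over the raw file bytes. A minimal, self-contained check using the first FS:[30h] alternative (`mov eax, fs:[30h]`) might look like this; `find_shellcode_offset` is an illustrative helper, not part of Viper:

```python
import re

# First alternative of the FS:[30h] signature: 64 a1 30 00 (mov eax, fs:[30h]).
FS30_PATTERN = b'\x64\xa1\x30\x00'


def find_shellcode_offset(data):
    """Return the offset of the first FS:[30h] match in data, or None."""
    match = re.search(FS30_PATTERN, data)
    return match.start() if match else None
```

Running it against a buffer with the signature embedded at offset 8 (`b'\x90' * 8 + b'\x64\xa1\x30\x00'`) returns 8, matching what the module would report.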
def cmd_notes(self, *args):
    parser = argparse.ArgumentParser(prog="notes", description="Show information on the opened file")
    group = parser.add_mutually_exclusive_group()
    group.add_argument('-l', '--list', action="store_true", help="List all notes available for the current file")
    group.add_argument('-a', '--add', action="store_true", help="Add a new note to the current file")
    group.add_argument('-v', '--view', metavar='note_id', type=int, help="View the specified note")
    group.add_argument('-e', '--edit', metavar='note_id', type=int, help="Edit an existing note")
    group.add_argument('-d', '--delete', metavar='note_id', type=int, help="Delete an existing note")

    try:
        args = parser.parse_args(args)
    except SystemExit:
        return

    if not __sessions__.is_set():
        self.log('error', "No session opened")
        return

    if args.list:
        # Retrieve all notes for the currently opened file.
        malware = Database().find(key='sha256', value=__sessions__.current.file.sha256)
        if not malware:
            self.log('error', "The opened file doesn't appear to be in the database, have you stored it yet?")
            return

        notes = malware[0].note
        if not notes:
            self.log('info', "No notes available for this file yet")
            return

        # Populate table rows and display the list of existing notes.
        rows = [[note.id, note.title] for note in notes]
        self.log('table', dict(header=['ID', 'Title'], rows=rows))
    elif args.add:
        title = input("Enter a title for the new note: ")

        # Create a new temporary file.
        tmp = tempfile.NamedTemporaryFile(delete=False)
        tmp.close()
        # Open the temporary file with the default editor, or with nano.
        os.system('"${EDITOR:-nano}" ' + tmp.name)
        # Once the user is done editing, read the content back and store
        # it in the database. Re-open by name, since some editors replace
        # the file rather than writing to it in place.
        with open(tmp.name, 'r') as edited:
            body = edited.read()
        Database().add_note(__sessions__.current.file.sha256, title, body)
        # Finally, remove the temporary file.
        os.remove(tmp.name)

        self.log('info', "New note with title \"{0}\" added to the current file".format(bold(title)))
    elif args.view:
        # Retrieve the note with the specified ID and print it.
        note = Database().get_note(args.view)
        if note:
            self.log('info', bold('Title: ') + note.title)
            self.log('info', bold('Body:') + '\n' + note.body)
        else:
            self.log('info', "There is no note with ID {0}".format(args.view))
    elif args.edit:
        # Retrieve the note with the specified ID.
        note = Database().get_note(args.edit)
        if note:
            # Create a new temporary file and write the old body to it.
            # Text mode, so the body can be written as a string.
            tmp = tempfile.NamedTemporaryFile(mode='w', delete=False)
            tmp.write(note.body)
            tmp.close()
            # Open the old body with the text editor.
            os.system('"${EDITOR:-nano}" ' + tmp.name)
            # Read the new body from the temporary file and update the note.
            with open(tmp.name, 'r') as edited:
                body = edited.read()
            Database().edit_note(args.edit, body)
            # Remove the temporary file.
            os.remove(tmp.name)

            self.log('info', "Updated note with ID {0}".format(args.edit))
    elif args.delete:
        # Delete the note with the specified ID.
        Database().delete_note(args.delete)
    else:
        parser.print_usage()
def cmd_store(self, *args):
    parser = argparse.ArgumentParser(prog="store", description="Store the opened file to the local repository")
    parser.add_argument('-d', '--delete', action="store_true", help="Delete the original file")
    parser.add_argument('-f', '--folder', type=str, nargs='+', help="Specify a folder to import")
    parser.add_argument('-s', '--file-size', type=int, help="Specify a maximum file size")
    parser.add_argument('-y', '--file-type', type=str, help="Specify a file type pattern")
    parser.add_argument('-n', '--file-name', type=str, help="Specify a file name pattern")
    parser.add_argument('-t', '--tags', type=str, nargs='+', help="Specify a list of comma-separated tags")

    try:
        args = parser.parse_args(args)
    except SystemExit:
        return

    if args.folder is not None:
        # Allow spaces in the path.
        args.folder = " ".join(args.folder)

    if args.tags is not None:
        # Remove spaces from the list of tags.
        args.tags = "".join(args.tags)

    def add_file(obj, tags=None):
        if get_sample_path(obj.sha256):
            self.log('warning', "Skip, file \"{0}\" appears to be already stored".format(obj.name))
            return False

        # Try to store the file object in the database.
        status = self.db.add(obj=obj, tags=tags)
        if status:
            # If that succeeds, also store the file in the local repository.
            # If something fails in the database (for example unicode strings)
            # we don't want the binary lying in the repository with no
            # associated database record.
            new_path = store_sample(obj)
            self.log("success", "Stored file \"{0}\" to {1}".format(obj.name, new_path))
        else:
            return False

        # Delete the original file if requested to do so.
        if args.delete:
            try:
                os.unlink(obj.path)
            except Exception as e:
                self.log('warning', "Failed deleting file: {0}".format(e))

        return True

    # If the user specified the --folder flag, walk recursively and try
    # to add all contained files to the local repository.
    # This is not going to open a new session.
    # TODO: perhaps disable or make recursion optional?
    if args.folder is not None:
        # Check if the specified folder is valid.
        if os.path.isdir(args.folder):
            # Walk through the folder and subfolders.
            for dir_name, dir_names, file_names in os.walk(args.folder):
                # Add each collected file.
                for file_name in file_names:
                    file_path = os.path.join(dir_name, file_name)

                    if not os.path.exists(file_path):
                        continue
                    # Skip empty files.
                    if not os.path.getsize(file_path) > 0:
                        continue

                    # Check if the file name matches the provided pattern.
                    if args.file_name:
                        if not fnmatch.fnmatch(file_name, args.file_name):
                            # self.log('warning', "Skip, file \"{0}\" doesn't match the file name pattern".format(file_path))
                            continue

                    # Check if the file type matches the provided pattern.
                    if args.file_type:
                        if args.file_type not in File(file_path).type:
                            # self.log('warning', "Skip, file \"{0}\" doesn't match the file type".format(file_path))
                            continue

                    # Check if the file exceeds the maximum size limit.
                    if args.file_size:
                        if os.path.getsize(file_path) > args.file_size:
                            self.log('warning', "Skip, file \"{0}\" is too big".format(file_path))
                            continue

                    file_obj = File(file_path)

                    # Add the file.
                    add_file(file_obj, args.tags)
        else:
            self.log('error', "You specified an invalid folder: {0}".format(args.folder))
    # Otherwise, try to store the currently opened file, if there is one.
    else:
        if __sessions__.is_set():
            if __sessions__.current.file.size == 0:
                self.log('warning', "Skip, file \"{0}\" appears to be empty".format(__sessions__.current.file.name))
                return False

            # Add the file.
            if add_file(__sessions__.current.file, args.tags):
                # Open a session on the new file.
                self.cmd_open(*[__sessions__.current.file.sha256])
        else:
            self.log('error', "No session opened")
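The filtering performed during the `--folder` walk (skip missing or empty files, then apply the name, type, and size filters) can be factored into a single predicate. `should_store` is an illustrative sketch, not part of Viper; the file type is passed in as a string rather than read through `File()`:

```python
import fnmatch
import os


def should_store(file_path, file_type, name_pattern=None,
                 type_pattern=None, max_size=None):
    """Replicate the skip logic of the --folder walk: a file is stored only
    if it exists, is non-empty, and passes the optional name, type, and
    size filters."""
    if not os.path.exists(file_path):
        return False
    # Skip empty files.
    if os.path.getsize(file_path) == 0:
        return False
    # Check the file name against the provided glob pattern.
    if name_pattern and not fnmatch.fnmatch(os.path.basename(file_path), name_pattern):
        return False
    # Check that the type pattern appears in the file type string.
    if type_pattern and type_pattern not in file_type:
        return False
    # Enforce the maximum size limit.
    if max_size and os.path.getsize(file_path) > max_size:
        return False
    return True
```

Keeping the filters in one predicate makes each skip condition individually testable, which the inline `continue` chain above does not allow.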