def phpinfo(web):
    found = 0x00
    from core.methods.print import posintact
    posintact("phpinfo finder")
    print(GR + ' [*] Importing file paths...')
    if os.path.exists('files/fuzz-db/phpinfo_paths.lst'):
        with open('files/fuzz-db/phpinfo_paths.lst', 'r') as paths:
            for path in paths:
                path = '/' + path.replace('\n', '')
                pathsinfo.append(path)
        print(O + ' [!] Starting bruteforce...')
        for p in pathsinfo:
            web0x00 = web + p
            req = requests.get(web0x00, allow_redirects=False, verify=False)
            if req.status_code == 200 or req.status_code == 302:
                # Match against req.text (str), not req.content (bytes),
                # and escape the literal parentheses in phpinfo()
                if re.search(r'<title>phpinfo\(\)</title>|<h1 class="p">PHP Version', req.text):
                    found = 0x01
                    print(G + ' [+] Found PHPInfo File At : ' + O + web0x00)
            else:
                print(B + ' [*] Checking : ' + C + web0x00 + R + ' (' + str(req.status_code) + ')')
        if found == 0x00:
            print(R + '\n [-] Did not find PHPInfo file...\n')
    else:
        print(R + ' [-] Bruteforce filepath not found!')
def apachestat(web):
    requests = session()
    flag = 0x00
    print(GR + ' [*] Loading module...')
    time.sleep(0.7)
    from core.methods.print import posintact
    posintact("apache status")
    print(C + ' [*] Importing fuzz parameters...')
    time.sleep(0.7)
    print(GR + ' [*] Initializing bruteforce...')
    with open('files/fuzz-db/apachestat_paths.lst', 'r') as paths:
        for path in paths:
            path = path.replace('\n', '')
            url = web + path
            print(B + ' [+] Trying : ' + C + url)
            resp = requests.get(url, allow_redirects=False, verify=False, timeout=7)
            if resp.status_code == 200 or resp.status_code == 302:
                print(O + ' [+] Apache Server Status Enabled at :' + C + color.TR3 + C + G + url + C + color.TR2 + C)
                flag = 0x01
    if flag == 0x00:
        print(R + ' [-] No server status enabled!')
    print(C + ' [+] Apache server status completed!\n')
def sharedns(web):
    requests = session()
    web = web.split('//')[1]
    from core.methods.print import posintact
    posintact("shared dns hostnames")
    print(C + ' [!] Looking up name servers on which the website is hosted...\n' + G)
    time.sleep(0.7)
    os.system('dig +nocmd ' + web + ' ns +noall +answer')
    h = input(C + '\n [*] Enter any DNS Server from above :> ')
    time.sleep(0.4)
    print(GR + ' [!] Discovering hosts on the same DNS server...')
    time.sleep(0.4)
    print(GR + " [~] Result: \n" + color.END)
    domains = [h]
    for dom in domains:
        text = requests.get('http://api.hackertarget.com/findshareddns/?q=' + dom).text
        dns = str(text)
        if 'error' in dns:
            print(R + ' [-] Outbound Query Exception!\n')
            time.sleep(0.8)
        elif 'No results found' in dns:
            print(R + ' [-] No shared DNS nameserver hosts...')
        else:
            for i in dns.splitlines():
                print(O + ' [+] Site found :>' + C + color.TR3 + C + G + i + C + color.TR2 + C)
                time.sleep(0.02)
def robot(web):
    requests = session()
    from core.methods.print import posintact
    posintact("robots checker")
    url = web + '/robots.txt'
    print(' [!] Testing for robots.txt...\n')
    try:
        resp = requests.get(url).text
        m = str(resp)
        print(G + ' [+] Robots.txt found!' + C + color.TR2 + C)
        print(GR + ' [*] Displaying contents of robots.txt...')
        print(color.END + m + C)
    except Exception:
        print(R + ' [-] Robots.txt not found')
    print(' [!] Testing for sitemap.xml...\n')
    url0 = web + '/sitemap.xml'
    try:
        resp = requests.get(url0).text
        m = str(resp)
        print(G + ' [+] Sitemap.xml found!' + C + color.TR2 + C)
        print(GR + ' [*] Displaying contents of sitemap.xml')
        print(color.END + m + C)
    except Exception:
        print(R + ' [-] Sitemap.xml not found')
def piwebenum(web):
    name = targetname(web)
    lvl2 = "piwebenum"
    module = "ReconANDOSINT"
    lvl1 = "Active Reconnaissance"
    lvl3 = ""
    requests = session()
    time.sleep(0.4)
    web = web.split('//')[1]
    from core.methods.print import posintact
    posintact("(n)ping enumeration")
    print(GR + ' [!] Pinging website...')
    time.sleep(0.5)
    print(C + ' [*] Using adaptive ping and debug mode with count 5...')
    time.sleep(0.4)
    print(GR + ' [!] Press Ctrl+C to stop\n' + color.END)
    os.system('ping -D -c 5 ' + web)
    print('')
    time.sleep(0.6)
    print(C + ' [*] Trying NPing (NMap Ping)...')
    print(C + " [~] Result: \n")
    print('')
    text = requests.get('http://api.hackertarget.com/nping/?q=' + web).text
    nping = str(text)
    print(color.END + nping + C + '\n')
    save_data(database, module, lvl1, lvl2, lvl3, name, nping)
def sslcert(web):
    if 'https' not in web:
        print(R + ' [-] Website does not use SSL...')
    else:
        web = str(web).split("/")[2]  # extract the hostname from the URL
        cerp = []
        from core.methods.print import posintact
        posintact("ssl certificate info")
        time.sleep(0.3)
        context = ssl.create_default_context()
        server = context.wrap_socket(socket.socket(), server_hostname=web)
        server.connect((web, 443))
        cer = server.getpeercert()
        cerpec = server.cipher()
        for x in cerpec:
            cerp.append(x)
        print(B + " [+] Certificate Serial Number : " + W + str(cer.get('serialNumber')))
        print(B + " [+] Certificate SSL Version : " + W + str(cer.get('version')))
        print(B + ' [+] SSL Cipher Suite : ' + W + str(cerp[0]))
        print(B + ' [+] Encryption Protocol : ' + W + str(cerp[1]))
        print(B + ' [+] Encryption Type : ' + W + str(cerp[2]) + ' bit')
        print(B + ' [+] Certificate as Raw : \n' + W + str(cer))
def grabhead(web):
    name = targetname(web)
    lvl2 = "grabhead"
    module = "ReconANDOSINT"
    lvl1 = "Active Reconnaissance"
    lvl3 = ""
    time.sleep(0.4)
    from core.methods.print import posintact
    posintact("grab http headers")
    print(GR + color.BOLD + ' [*] Grabbing HTTP Headers...')
    time.sleep(0.4)
    web = web.rstrip()
    try:
        headerwhole = str(urllib.request.urlopen(web).info())
        header = headerwhole.splitlines()
        print('')
        for m in header:
            # Split only on the first ':' so values containing colons (dates, URLs) stay intact
            n = m.split(':', 1)
            if len(n) == 2:
                print(' ' + C + n[0] + ': ' + C + n[1])
        print('')
        save_data(database, module, lvl1, lvl2, lvl3, name, headerwhole)
    except urllib.error.HTTPError as e:
        print(R + ' [-] ' + str(e))
def subdom(web):
    global fileo
    if 'http' in web:
        web = web.replace('http://', '')
        web = web.replace('https://', '')
    webb = web
    if "@" in web:
        webb = web.split("@")[1]
    fileo = 'tmp/logs/' + webb + '-logs/' + str(webb) + '-subdomains.lst'
    p = open(fileo, 'w+')
    p.close()
    from core.methods.print import posintact
    posintact("subdomain gatherer")
    time.sleep(0.7)
    print(B + ' [*] Initializing Step [1]...')
    subdombrute(web)
    print(G + '\n [+] Module [1] Bruteforce Completed!\n')
    print(B + ' [*] Initializing Step [2]...')
    outer(web)
    print(G + ' [+] Module [2] API Retriever Completed!\n')
    acc = report(web, found, final)
    print(O + ' [*] Writing found subdomains to a file...')
    if acc:
        for pwn in acc:
            vul = str(pwn) + '\n'
            with open(fileo, 'a') as miv:
                miv.write(vul)
    print(G + ' [+] Done!')
def httpmethods(web):
    try:
        from core.methods.print import posintact
        posintact("http methods")
        print(GR + ' [*] Parsing Url...')
        time.sleep(0.7)
        web = web.replace('https://', '')
        web = web.replace('http://', '')
        print(C + ' [!] Making the connection...')
        conn = http.client.HTTPConnection(web)
        conn.request('OPTIONS', '/')
        response = conn.getresponse()
        q = str(response.getheader('allow'))
        if 'None' not in q:
            print(G + ' [+] The following HTTP methods are allowed...' + C + color.TR2 + C)
            for method in q.split(','):
                print(O + ' ' + method + C)
        else:
            print(R + ' [-] HTTP Method Options Request Unsuccessful...')
    except Exception as e:
        print(R + ' [-] Exception Encountered!')
        print(R + ' [-] Error : ' + str(e))
def altsites(web):
    requests = session()
    responses = {}  # result maps, missing in the original and needed below
    md5s = {}
    from core.methods.print import posintact
    posintact("alternative sites")
    print(GR + ' [*] Setting User-Agents...')
    time.sleep(0.7)
    user_agents = {
        'Chrome on Windows 8.1': 'Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/40.0.2214.115 Safari/537.36',
        'Safari on iOS': 'Mozilla/5.0 (iPhone; CPU iPhone OS 8_1_3 like Mac OS X) AppleWebKit/600.1.4 (KHTML, like Gecko) Version/8.0 Mobile/12B466 Safari/600.1.4',
        'IE6 on Windows XP': 'Mozilla/5.0 (Windows; U; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 2.0.50727)',
        'Googlebot': 'Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)'
    }
    print(GR + '\n [*] Preparing for series of requests...')
    for name, agent in user_agents.items():
        print(B + ' [+] Using User-Agent : ' + C + name)
        print(GR + ' [+] UA Value : ' + O + agent)
        headers = {'User-Agent': agent}
        print(GR + ' [*] Making the request...')
        req = requests.get(web, headers=headers, allow_redirects=True, verify=True)
        responses[name] = req
    print(C + '\n [!] Comparing base value standards...')
    time.sleep(0.5)
    for name, response in responses.items():
        print(O + ' [+] User-Agent :' + C + color.TR3 + C + G + name + C + color.TR2 + C)
        print(O + ' [+] Response :' + C + color.TR3 + C + G + str(response) + C + color.TR2 + C)
        md5s[name] = hashlib.md5(response.text.encode('utf-8')).hexdigest()
    print(C + '\n [!] Matching hexdigest signatures...')
    for name, md5 in md5s.items():
        print(O + ' [+] User-Agent :' + C + color.TR3 + C + G + name + C + color.TR2 + C)
        print(O + ' [+] Hex-Digest :' + C + color.TR3 + C + G + str(md5) + C + color.TR2 + C)
        if name != 'Chrome on Windows 8.1':
            if md5 != md5s['Chrome on Windows 8.1']:
                print(G + ' [+] ' + str(name) + ' differs from baseline!' + C + color.TR2 + C)
            else:
                print(R + ' [-] No alternative site found via User-Agent spoofing: ' + str(md5))
    print(C + '\n [+] Alternate Site Discovery Completed!\n')
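The comparison above boils down to hashing each response body and checking it against the baseline User-Agent's hash. A minimal standalone sketch of that logic (the helper names `body_fingerprint` and `differs_from_baseline` are illustrative, not part of the framework):

```python
import hashlib


def body_fingerprint(body):
    """MD5 hex digest of a response body, as altsites() computes per User-Agent."""
    return hashlib.md5(body.encode('utf-8')).hexdigest()


def differs_from_baseline(fingerprints, baseline='Chrome on Windows 8.1'):
    """Return the User-Agent names whose body hash differs from the baseline's."""
    base = fingerprints[baseline]
    return [name for name, md5 in fingerprints.items()
            if name != baseline and md5 != base]
```

A server that cloaks content for crawlers would show up as `['Googlebot']` here, for example, if only the Googlebot body hashed differently.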
def commentssrc(web):
    requests = session()
    found = 0x00
    urls = []
    from core.methods.print import posintact
    posintact("comment scraper")
    print(C + ' [It is recommended to run ScanEnum/Crawlers')
    print(C + ' before running this module]\n')
    print(GR + ' [*] Importing links...')
    po = web.split('//')[1]
    p = 'tmp/logs/' + po + '-logs/' + po + '-links.lst'
    links = [web]
    for w in links:
        print(GR + ' [*] Making the request...')
        req = requests.get(w).text  # parse the decoded text, not raw bytes
        print(C + ' [!] Setting parse parameters...')
        comments = re.findall('<!--(.*)-->', req)
        print(O + " [+] Searching for comments on page:" + C + color.TR3 + C + G + web + C + color.TR2 + C + '\n')
        for comment in comments:
            print(C + ' ' + comment)
            time.sleep(0.03)
            found = 0x01
        soup = BeautifulSoup(req, 'lxml')
        for line in soup.find_all('a'):
            newline = line.get('href')
            try:
                if newline[:4] == "http":
                    if web in newline:
                        urls.append(str(newline))
                elif newline[:1] == "/":
                    combline = web + newline
                    urls.append(str(combline))
            except Exception:
                print(R + ' [-] Unhandled Exception Occurred!')
    try:
        for uurl in urls:
            print(O + "\n [+] Searching for comments on page: " + C + color.TR3 + C + G + uurl + C + color.TR2 + C + '\n')
            req = requests.get(uurl)
            comments = re.findall('<!--(.*)-->', req.text)
            for comment in comments:
                print(C + ' ' + comment)
                time.sleep(0.03)
                found = 0x01
    except Exception:
        print(R + ' [-] Outbound Query Exception...')
    if found == 0x00:
        print(R + ' [-] No comments found in source code!')
    print(G + ' [+] Comments Scraping Done!' + C + color.TR2 + C)
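The scraper's pattern `<!--(.*)-->` is greedy and has no `DOTALL` flag, so two comments on one line merge into a single match and multi-line comments are missed entirely. A sketch of a more robust extractor (the helper name `extract_html_comments` is illustrative, not part of the framework):

```python
import re


def extract_html_comments(html):
    """Return all HTML comment bodies, including multi-line ones.

    Non-greedy '.*?' stops at the first '-->', and re.DOTALL lets '.'
    cross newlines, fixing both shortcomings of the greedy pattern.
    """
    return [c.strip() for c in re.findall(r'<!--(.*?)-->', html, re.DOTALL)]
```

On `'<!-- a --><!-- b -->'` the greedy pattern yields one merged match, while this version returns `['a', 'b']`.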
def serverdetect(web):
    name = targetname(web)
    lvl2 = "serverdetect"
    module = "ReconANDOSINT"
    lvl1 = "Active Reconnaissance"
    lvl3 = ""
    requests = session()
    from core.methods.print import posintact
    posintact("detect server")
    time.sleep(0.4)
    print(GR + ' [*] Checking server status...')
    web = web.replace('https://', '')
    web = web.replace('http://', '')
    try:
        ip_addr = socket.gethostbyname(web)
        print(G + ' [+] Server detected online...' + C + color.TR2 + C)
        time.sleep(0.5)
        print(O + ' [+] Server IP :>' + C + color.TR3 + C + G + ip_addr + C + color.TR2 + C)
        data = "IP: " + ip_addr
        save_data(database, module, lvl1, lvl2, lvl3, name, data)
    except Exception:
        print(R + ' [-] Server seems down...')
    print(GR + ' [*] Trying to identify backend...')
    time.sleep(0.4)
    web = 'http://' + web
    try:
        r = requests.get(web)
        header = r.headers['Server']
        if 'cloudflare' in header:
            print(C + ' [+] The website is behind Cloudflare.')
            print(G + ' [+] Server : Cloudflare' + C + color.TR2 + C)
            time.sleep(0.4)
            print(O + ' [+] Use the "Cloudflare" VulnLysis module to try bypassing Cloudflare...' + C)
        else:
            print(G + ' [+] Server : ' + header + C + color.TR2 + C)
        data = "Server: " + header
        save_data(database, module, lvl1, lvl2, lvl3, name, data)
        try:
            print(O + ' [+] Running On :' + C + color.TR3 + C + G + r.headers['X-Powered-By'] + C + color.TR2 + C)
            data = "Running On: " + r.headers['X-Powered-By']
            save_data(database, module, lvl1, lvl2, lvl3, name, data)
        except KeyError:
            pass
    except Exception:
        print(R + ' [-] Failed to identify server. Some error occurred!')
def cms(web):
    from core.methods.print import posintact
    posintact("cms detector")
    time.sleep(0.4)
    print(GR + ' [*] Parsing the web URL... ')
    time.sleep(0.4)
    print(C + ' [!] Initiating Content Management System Detection!')
    getcmslook(web)
    cmsenum(web)
    if not dtect:
        print(R + " [-] " + O + web + R + " doesn't seem to use a CMS")
    print(G + ' [+] CMS Detection Module Completed!' + C + color.TR2 + C)
def phpinfo(web):
    name = targetname(web)
    lvl2 = "phpinfo"
    module = "ReconANDOSINT"
    lvl1 = "Active Reconnaissance"
    lvl3 = ""
    requests = session()
    found = 0x00
    from core.methods.print import posintact
    posintact("phpinfo finder")
    print(GR + ' [*] Importing file paths...')
    if os.path.exists('files/fuzz-db/phpinfo_paths.lst'):
        with open('files/fuzz-db/phpinfo_paths.lst', 'r') as paths:
            for path in paths:
                path = '/' + path.replace('\n', '')
                pathsinfo.append(path)
        print(C + ' [!] Starting bruteforce...')
        for p in pathsinfo:
            web0x00 = web + p
            req = requests.get(web0x00, allow_redirects=False, verify=False)
            if req.status_code == 200 or req.status_code == 302:
                # Match against req.text (str), not req.content (bytes),
                # and escape the literal parentheses in phpinfo()
                if re.search(r'<title>phpinfo\(\)</title>|<h1 class="p">PHP Version', req.text):
                    found = 0x01
                    print(O + ' [+] Found PHPInfo File At :' + C + color.TR3 + C + G + web0x00 + C + color.TR2 + C)
                    data = "phpinfo @ " + web0x00
                    save_data(database, module, lvl1, lvl2, lvl3, name, data)
            else:
                print(B + ' [*] Checking : ' + C + web0x00 + R + ' (' + str(req.status_code) + ')')
        if found == 0x00:
            print(R + '\n [-] Did not find PHPInfo file...\n')
            save_data(database, module, lvl1, lvl2, lvl3, name, "Did not find PHPInfo file.")
    else:
        print(R + ' [-] Bruteforce filepath not found!')
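The wordlist join above (strip the newline, prepend `/`) is worth isolating, since a wordlist mixing `phpinfo.php` and `/info.php` entries would otherwise produce double slashes or missing ones. A small sketch of that normalization (the helper name `build_candidates` is hypothetical, for illustration only):

```python
def build_candidates(base, paths):
    """Join a base URL with wordlist entries the way the phpinfo finder does:
    strip the trailing newline and ensure exactly one '/' at the join point."""
    out = []
    for raw in paths:
        path = raw.rstrip('\n')
        if not path.startswith('/'):
            path = '/' + path
        out.append(base.rstrip('/') + path)
    return out
```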
def sslcert(web):
    name = targetname(web)
    lvl2 = "sslcert"
    module = "ReconANDOSINT"
    lvl1 = "Active Reconnaissance"
    lvl3 = ""
    if 'https' not in web:
        print(R + ' [-] Website does not use SSL...')
    else:
        web = str(web).split("/")[2]  # extract the hostname from the URL
        cerp = []
        from core.methods.print import posintact
        posintact("ssl certificate info")
        time.sleep(0.3)
        context = ssl.create_default_context()
        server = context.wrap_socket(socket.socket(), server_hostname=web)
        server.connect((web, 443))
        cer = server.getpeercert()
        cerpec = server.cipher()
        for x in cerpec:
            cerp.append(x)
        sn = str(cer.get('serialNumber'))
        vers = str(cer.get('version'))
        cs = str(cerp[0])
        proto = str(cerp[1])
        etype = str(cerp[2])
        print(B + " [+] Certificate Serial Number : " + W + sn)
        print(B + " [+] Certificate SSL Version : " + W + vers)
        print(B + ' [+] SSL Cipher Suite : ' + W + cs)
        print(B + ' [+] Encryption Protocol : ' + W + proto)
        print(B + ' [+] Encryption Type : ' + W + etype + ' bit')
        print(B + ' [+] Certificate as Raw : \n' + W + str(cer))
        data = "Serial Number :> {}\nVersion :> {}\nCipher Suite :> {}\n".format(sn, vers, cs)
        data = data + "Encryption Protocol :> {}\nEncryption Type :> {}\n\n{}".format(proto, etype, str(cer))
        save_data(database, module, lvl1, lvl2, lvl3, name, data)
def serverdetect(web):
    from core.methods.print import posintact
    posintact("detect server")
    time.sleep(0.4)
    print(GR + ' [*] Checking server status...')
    web = web.replace('https://', '')
    web = web.replace('http://', '')
    try:
        ip_addr = socket.gethostbyname(web)
        print(G + ' [+] Server detected online...')
        time.sleep(0.5)
        print(G + ' [+] Server IP :> ' + ip_addr)
    except Exception:
        print(R + ' [-] Server seems down...')
    print(GR + ' [*] Trying to identify backend...')
    time.sleep(0.4)
    web = 'http://' + web
    try:
        r = requests.get(web)
        header = r.headers['Server']
        if 'cloudflare' in header:
            print(O + ' [+] The website is behind Cloudflare.')
            print(G + ' [+] Server : Cloudflare')
            time.sleep(0.4)
            print(O + ' [+] Use the "Cloudflare" VulnLysis module to try bypassing Cloudflare...')
        else:
            print(B + ' [+] Server : ' + C + header)
        try:
            print(O + ' [+] Running On : ' + G + r.headers['X-Powered-By'])
        except KeyError:
            pass
    except Exception:
        print(R + ' [-] Failed to identify server. Some error occurred!')
def robot(web):
    name = targetname(web)
    lvl2 = "robot"
    module = "ReconANDOSINT"
    lvl1 = "Active Reconnaissance"
    lvl3 = ""
    requests = session()
    from core.methods.print import posintact
    posintact("robots checker")
    url = web + '/robots.txt'
    print(' [!] Testing for robots.txt...\n')
    try:
        resp = requests.get(url).text
        m = str(resp)
        print(G + ' [+] Robots.txt found!' + C + color.TR2 + C)
        print(GR + ' [*] Displaying contents of robots.txt...')
        print(color.END + m + C)
        data = ">> robots.txt:\n" + m
        save_data(database, module, lvl1, lvl2, lvl3, name, data)
    except Exception:
        print(R + ' [-] Robots.txt not found')
        save_data(database, module, lvl1, lvl2, lvl3, name, "robots.txt not found.")
    print(' [!] Testing for sitemap.xml...\n')
    url0 = web + '/sitemap.xml'
    try:
        resp = requests.get(url0).text
        m = str(resp)
        print(G + ' [+] Sitemap.xml found!' + C + color.TR2 + C)
        print(GR + ' [*] Displaying contents of sitemap.xml')
        print(color.END + m + C)
        data = ">> sitemap.xml:\n" + m
        save_data(database, module, lvl1, lvl2, lvl3, name, data)
    except Exception:
        print(R + ' [-] Sitemap.xml not found')
        save_data(database, module, lvl1, lvl2, lvl3, name, "sitemap.xml not found.")
def grabhead(web):
    time.sleep(0.4)
    from core.methods.print import posintact
    posintact("grab http headers")
    print(GR + color.BOLD + ' [*] Grabbing HTTP Headers...')
    time.sleep(0.4)
    web = web.rstrip()
    try:
        header = str(urllib.request.urlopen(web).info()).splitlines()
        print('')
        for m in header:
            # Split only on the first ':' so values containing colons stay intact
            n = m.split(':', 1)
            if len(n) == 2:
                print(' ' + C + n[0] + ': ' + O + n[1])
        print('')
    except urllib.error.HTTPError as e:
        print(R + ' [-] ' + str(e))
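A naive `line.split(':')` on a raw header line mangles any value that itself contains a colon, such as a `Date` header; splitting on the first colon only avoids this. A minimal sketch of that parsing step (the helper name `parse_header_line` is illustrative, not part of the framework):

```python
def parse_header_line(line):
    """Split a raw 'Name: value' header line on the first colon only,
    so values containing ':' (dates, URLs) survive intact."""
    name, _, value = line.partition(':')
    return name.strip(), value.strip()
```

`str.partition` always returns three parts, so a malformed line without a colon simply yields an empty value instead of raising.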
def apachestat(web):
    name = targetname(web)
    requests = session()
    lvl2 = "apachestat"
    module = "ReconANDOSINT"
    lvl1 = "Active Reconnaissance"
    lvl3 = ""
    flag = 0x00
    time.sleep(0.7)
    from core.methods.print import posintact
    posintact("apache status")
    print(C + ' [*] Importing fuzz parameters...')
    time.sleep(0.7)
    print(GR + ' [*] Initializing bruteforce...')
    with open('files/fuzz-db/apachestat_paths.lst', 'r') as paths:
        for path in paths:
            path = path.replace('\n', '')
            url = web + path
            print(B + ' [+] Trying : ' + C + url)
            resp = requests.get(url, allow_redirects=False, verify=False, timeout=7)
            if resp.status_code == 200 or resp.status_code == 302:
                print(O + ' [+] Apache Server Status Enabled at :' + C + color.TR3 + C + G + url + C + color.TR2 + C)
                flag = 0x01
                save_data(database, module, lvl1, lvl2, lvl3, name, url)
    if flag == 0x00:
        save_data(database, module, lvl1, lvl2, lvl3, name, "No server status enabled.")
        print(R + ' [-] No server status enabled!')
    print(C + ' [+] Apache server status completed!\n')
def cms(web):
    global lvl1, lvl2, lvl3, module
    lvl2 = "cms"
    module = "ReconANDOSINT"
    lvl1 = "Active Reconnaissance"
    lvl3 = ""
    name = targetname(web)
    from core.methods.print import posintact
    posintact("cms detector")
    time.sleep(0.4)
    print(GR + ' [*] Parsing the web URL... ')
    time.sleep(0.4)
    print(C + ' [!] Initiating Content Management System Detection!')
    getcmslook(web, name)
    cmsenum(web, name)
    if not dtect:
        print(R + " [-] " + O + web + R + " doesn't seem to use a CMS")
        save_data(database, module, lvl1, lvl2, lvl3, name, "No CMS detected.")
    print(G + ' [+] CMS Detection Module Completed!' + C + color.TR2 + C)
def subdom(web):
    name = targetname(web)
    global fileo
    if 'http' in web:
        web = web.replace('http://', '')
        web = web.replace('https://', '')
    webb = web
    if "@" in web:
        webb = web.split("@")[1]
    fileo = 'tmp/logs/' + webb + '-logs/' + str(webb) + '-subdomains.lst'
    p = open(fileo, 'w+')
    p.close()
    from core.methods.print import posintact
    posintact("subdomain gatherer")
    time.sleep(0.7)
    print(B + ' [*] Initializing Step [1]...')
    subdombrute(web)
    print(C + '\n [+] Module [1] Bruteforce Completed!\n')
    print(B + ' [*] Initializing Step [2]...')
    outer(web)
    print(C + ' [+] Module [2] API Retriever Completed!\n')
    acc = report(web, found, final)
    print(C + ' [*] Writing found subdomains to DB...')
    lvl2 = "subdom"
    module = "ReconANDOSINT"
    lvl1 = "Active Reconnaissance"
    lvl3 = ""
    if acc:
        for pwn in acc:
            save_data(database, module, lvl1, lvl2, lvl3, name, str(pwn))
    else:
        save_data(database, module, lvl1, lvl2, lvl3, name, "No subdomains found for " + web)
    print(C + ' [+] Done!')
def httpmethods(web):
    name = targetname(web)
    lvl2 = "httpmethods"
    module = "ReconANDOSINT"
    lvl1 = "Active Reconnaissance"
    lvl3 = ""
    try:
        from core.methods.print import posintact
        posintact("http methods")
        print(GR + ' [*] Parsing Url...')
        time.sleep(0.7)
        web = web.replace('https://', '')
        web = web.replace('http://', '')
        print(C + ' [!] Making the connection...')
        conn = http.client.HTTPConnection(web)
        conn.request('OPTIONS', '/')
        response = conn.getresponse()
        q = str(response.getheader('allow'))
        if 'None' not in q:
            print(G + ' [+] The following HTTP methods are allowed...' + C + color.TR2 + C)
            for method in q.split(','):
                print(O + ' ' + method + C)
            save_data(database, module, lvl1, lvl2, lvl3, name, q)
        else:
            print(R + ' [-] HTTP Method Options Request Unsuccessful...')
            save_data(database, module, lvl1, lvl2, lvl3, name, "HTTP Method Options Request Unsuccessful.")
    except Exception as e:
        print(R + ' [-] Exception Encountered!')
        print(R + ' [-] Error : ' + str(e))
def piwebenum(web):
    requests = session()
    time.sleep(0.4)
    web = web.split('//')[1]
    from core.methods.print import posintact
    posintact("(n)ping enumeration")
    print(GR + ' [!] Pinging website...')
    time.sleep(0.5)
    print(O + ' [*] Using adaptive ping and debug mode with count 5...')
    time.sleep(0.4)
    print(GR + ' [!] Press Ctrl+C to stop\n' + C)
    os.system('ping -D -c 5 ' + web)
    print('')
    time.sleep(0.6)
    print(O + ' [*] Trying NPing (NMap Ping)...')
    print(C + " [~] Result: \n")
    print('')
    text = requests.get('http://api.hackertarget.com/nping/?q=' + web).text
    nping = str(text)
    print(G + nping + '\n')
def dav(web):
    name = targetname(web)
    time.sleep(0.7)
    from core.methods.print import posintact
    posintact("dav http enumeration")
    time.sleep(0.7)
    print(C + ' [!] Loading HTTP methods...')
    global pro, sr
    pro = 'PROPFIND'  # the WebDAV method is PROPFIND, not PROFIND
    sr = 'SEARCH'
    print(GR + '\n [*] Initiating HTTP Search module...')
    htsearch(web, name)
    print(C + ' [+] HTTP Search module Completed!')
    time.sleep(1)
    print(GR + '\n [*] Initiating HTTP Propfind Module...')
    profind(web, name)
    print(C + ' [+] HTTP Propfind Module Completed!')
    print(G + ' [+] HTTP Profiling of DAV Completed!' + C + color.TR2 + C + '\n')
def traceroute(web):
    from core.methods.print import posintact
    posintact("traceroute")
    web = web.replace('https://', '')
    web = web.replace('http://', '')
    m = input(C + ' [?] Do you want to fragment the packets? (y/n) :> ')
    if m.lower() == 'y':
        print(GR + ' [!] Using fragmented packets...')
        p = input(C + ' [§] Enter the network type to be used [(I)CMP/(T)CP] :> ')
        if p.lower() in ('icmp', 'i'):
            print(GR + ' [*] Using ICMP ECHO type for traceroute...')
            w = input(C + ' [*] Enable socket level debugging? (y/n) :> ')
            if w.lower() == 'y':
                print(GR + ' [+] Enabling socket level debugging...')
                sleep(0.3)
                print(GR + ' [+] Starting traceroute...' + C)
                os.system('traceroute -I -d ' + web)
            elif w.lower() == 'n':
                sleep(0.3)
                print(GR + ' [+] Starting traceroute...' + C)
                os.system('traceroute -I ' + web)
            else:
                print(R + ' [-] Invalid choice...')
                traceroute(web)
        elif p.lower() in ('tcp', 't'):
            print(GR + ' [*] Using TCP/SYN for traceroute...')
            w = input(C + ' [*] Enable socket level debugging? (y/n) :> ')
            if w.lower() == 'y':
                print(GR + ' [+] Enabling socket level debugging...')
                sleep(0.3)
                print(GR + ' [+] Starting traceroute...' + C)
                os.system('traceroute -T -d ' + web)
            elif w.lower() == 'n':
                sleep(0.3)
                print(GR + ' [+] Starting traceroute...' + C)
                os.system('traceroute -T ' + web)
            else:
                print(R + ' [-] Invalid choice...')
                traceroute(web)
        else:
            print(R + ' [-] Invalid choice...')
            traceroute(web)
    elif m.lower() == 'n':
        print(GR + ' [!] Using unfragmented packets...')
        p = input(C + ' [§] Enter the network type to be used (ICMP/TCP) :> ')
        if p.lower() in ('icmp', 'i'):
            print(GR + ' [*] Using ICMP ECHO type for traceroute...')
            w = input(C + ' [*] Enable socket level debugging? (y/n) :> ')
            if w.lower() == 'y':
                print(GR + ' [+] Enabling socket level debugging...')
                sleep(0.3)
                print(GR + ' [+] Starting traceroute...' + C)
                os.system('traceroute -I -d -F ' + web)  # -F sets the "don't fragment" bit
            elif w.lower() == 'n':
                sleep(0.3)
                print(GR + ' [+] Starting traceroute...' + C)
                os.system('traceroute -I -F ' + web)
            else:
                print(R + ' [-] Invalid choice...')
                traceroute(web)
        elif p.lower() in ('tcp', 't'):
            print(GR + ' [*] Using TCP/SYN for traceroute...')
            w = input(C + ' [*] Enable socket level debugging? (y/n) :> ')
            if w.lower() == 'y':
                print(GR + ' [+] Enabling socket level debugging...')
                sleep(0.3)
                print(GR + ' [+] Starting traceroute...' + C)
                os.system('traceroute -T -d -F ' + web)
            elif w.lower() == 'n':
                sleep(0.3)
                print(GR + ' [+] Starting traceroute...' + C)
                os.system('traceroute -T -F ' + web)
            else:
                print(R + ' [-] Invalid choice...')
                traceroute(web)
        else:
            print(R + ' [-] Invalid choice...')
            traceroute(web)
    else:
        print(R + ' [-] Invalid choice...')
        traceroute(web)
    print(G + ' [+] Traceroute done.' + C + color.TR2 + C + '\n')
def filebrute(web):
    print(GR + ' [*] Loading module...')
    print(B + " [!] Module Selected : " + C + "Bruteforce Modules\n\n")
    time.sleep(0.7)
    from core.methods.print import posintact
    posintact("bruteforce recon")
    print(O + ' Choose from the following options :\n')
    print(B + ' [1]' + C + ' Common Backdoor Paths ' + W + ' (.shell, c99.php, etc)')
    print(B + ' [2]' + C + ' Common Backup Locations' + W + ' (.bak, .db, etc)')
    print(B + ' [3]' + C + ' Common Dot Files' + W + ' (.phpinfo, .htaccess, etc)')
    print(B + ' [4]' + C + ' Common Password Paths' + W + ' (.skg, .pgp etc)')
    print(B + ' [5]' + C + ' Common Proxy Config. Locations' + W + ' (.pac, etc)')
    print(B + ' [6]' + C + ' Multiple Index Locations' + W + ' (index1, index2, etc)')
    print(B + ' [7]' + C + ' Common Log Locations' + W + ' (.log, changelogs, etc)\n')
    print(B + ' [A]' + C + ' The Auto-Awesome Module\n')
    print(B + ' [99]' + C + ' Back\n')
    time.sleep(0.3)
    v = input(O + ' [#] \033[1;4mTID\033[0m' + O + ' :> ' + color.END)
    print('')
    v = v.strip()
    # Dispatch table replaces the seven near-identical elif blocks
    modules = {
        '1': ('Backdoor Brute', backbrute),
        '2': ('Backup Brute', backupbrute),
        '3': ('Dot File Brute', dotbrute),
        '4': ('Password Brute', passbrute),
        '5': ('Proxy Brute', proxybrute),
        '6': ('Multiple Indices', indexmulbrute),
        '7': ('Log Locations', logbrute),
    }
    if v in modules:
        label, func = modules[v]
        print(B + ' [!] Type Selected :' + C + ' ' + label)
        func(web)
        input(O + ' [#] Press ' + GR + 'Enter' + O + ' to continue...')
        print('\n\n')
        filebrute(web)
    elif v == 'A':
        print(B + ' [!] Type Selected :' + C + ' All Modules')
        time.sleep(0.5)
        for label, func in modules.values():
            print(B + ' [*] Firing up module -->' + C + ' ' + label)
            func(web)
            print(B + ' [!] Module Completed -->' + C + ' ' + label + '\n')
            time.sleep(2)
        print(B + ' [!] All scantypes have been tested on target...')
        time.sleep(4)
        input(O + ' [#] Press ' + GR + 'Enter' + O + ' to continue...')
        print(B + ' [*] Going back to menu...')
    elif v == '99':
        print(B + ' [*] Back to the menu !')
    else:
        dope = [
            'You high dude?', 'Shit! Enter a valid option',
            "Whoops! That's not an option", 'Sorry! You just typed shit'
        ]
        print(' [-] ' + dope[randint(0, 3)])
        time.sleep(0.7)
        os.system('clear')
        filebrute(web)