def main():
    exit_choice = 0
    language = input(
        "Please select your language: english (0), spanish (1), french (2) or a different language (3)\n"
    )
    messages, library = fn.language_menu(language)
    while exit_choice != 1:
        while True:
            try:
                search_or_add_input = int(input(messages[1]))
                if search_or_add_input == 0:
                    os.system('cls')
                    title = input(messages[2])
                    author = input(messages[3])
                    year = input(messages[4])
                    location = input(messages[5])
                    description = input(messages[6])
                    language_input = input(messages[23])
                    state = input(messages[24])
                    fn.add(title, author, year, location, description, language_input, state)
                elif search_or_add_input == 1:
                    os.system('cls')
                    while True:
                        category = input(messages[14])
                        try:
                            category = int(category)
                            if category not in range(7):
                                print(messages[0])
                                continue
                            break
                        except ValueError:
                            os.system('cls')
                            print(messages[0])
                            continue
                    book = input(messages[15])
                    fn.search(category, book, language)
                else:
                    os.system('cls')
                    print(messages[0])
                    continue
                break
            except ValueError:
                os.system('cls')
                print(messages[0])
        while True:
            exit_choice = input(messages[18])
            try:
                exit_choice = int(exit_choice)
                if exit_choice not in range(2):
                    print(messages[0])
                    continue
                break
            except ValueError:
                os.system('cls')
                print(messages[0])
    os.system('cls')
    print('\n' + messages[17])
    time.sleep(8)
def test_search():
    """Tests that the output is a dictionary that contains the message."""
    assert callable(search)
    assert isinstance(search("BILL IS WATCHING"), dict)
    assert search("BILL IS WATCHING") == {
        'title': 'Gideon Rises',
        'message': 'BILL IS WATCHING',
        'crypt': 'ELOO LV ZDWFKLQJ',
        'where': 'In pipes of Mystery Shack',
        'type': 'Caesar',
    }
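The test above pins down the expected shape of `search`'s return value; the `crypt` field is the message Caesar-shifted by 3. A minimal sketch that would satisfy these assertions — the lookup table here is hypothetical, reduced to the one entry the test checks:

```python
def caesar(text, shift=3):
    """Caesar-shift alphabetic characters, leaving everything else alone."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)
    return ''.join(out)


def search(message):
    """Hypothetical lookup: map a known message to its metadata record."""
    records = {
        'BILL IS WATCHING': {
            'title': 'Gideon Rises',
            'where': 'In pipes of Mystery Shack',
            'type': 'Caesar',
        },
    }
    meta = records.get(message, {})
    return dict(meta, message=message, crypt=caesar(message))
```

Because `caesar` wraps around the alphabet with a modulo, the same helper also handles letters near 'Z'.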
def lis():
    voice = ls.recd()
    if voice == "Err-1":
        playsound("./Media/beep-10.mp3")
        sp.spk("Couldn't get that")
    elif voice == 'Err-2':
        playsound("./Media/beep-10.mp3")
        sp.spk("Connection to Google Broken")
    else:
        words = voice.split(' ')
        fn.search(words[0], words[1])
def main():
    choice = input("(S)earch, (N)ame, (I)nfo, or (Q)uit\n").lower()
    if choice == "i":
        info(ask_info())
        main()
    elif choice == "s":
        stuff = ask_value(ask_info())
        search(stuff[0], stuff[1])
        main()
    elif choice == "n":
        search_name()
        main()
    elif choice == "q":
        sys.exit()
    else:
        main()
def main():
    print('Welcome y\'all...\n', '-' * 15, sep='')
    search_term = input('What\'s ur search term: ')
    total, results = search(search_term)
    print('\nThe top {} matches are:\n'.format(total))
    for i, anime in enumerate(results, start=1):
        print('{}.\t{}'.format(i, anime.get('title')))
    anime = results[int(input('Enter ur anime number: ')) - 1]
    file_path = os.path.join('links', anime['slug'] + '.txt')
    print('There are {} episodes for this anime'.format(anime['episodes']))
    print('Download links will be saved in {}'.format(file_path))
    episodes = get_episodes(anime['id'])[1]
    episodes.reverse()
    with open(file_path, 'w') as f:
        for i, episode in enumerate(episodes, 1):
            # print('Getting link for episode', i)
            embed_link = get_embed_links('https://animepahe.com/play/' +
                                         anime['slug'] + '/' + episode['session'])
            down_link = get_down_link(embed_link)
            f.write(down_link + '\n')
            # print('Got link for episode', i)
    print('That\'ll be all... Thank you :D')
def edit(book):
    surname = input("Input abonent's surname >>")
    name = input("Input abonent's name >>")
    result = search(book, surname=surname, name=name)
    if len(result) == 0:
        print("I can not find this Abonent. Please, try again")
        return
    index = 0
    for i in range(len(book)):
        if book[i] == result[0]:
            index = i
    print("Hello! You will be prompted to edit user information. "
          "Input \"-\" if you don't want to edit a field")
    surname = input("Input abonent's surname >>")
    if not decoratorMinusFriendly(isName, surname):
        print("Incorrect surname!")
        return
    name = input("Input abonent's name >>")
    if not decoratorMinusFriendly(isName, name):
        print("Incorrect name!")
        return
    p_number = input("Input abonent's phone number >>")
    if not decoratorMinusFriendly(isNumber, p_number):
        print("Incorrect phone number!")
        return
    birth_date = input("Input abonent's birth date >>")
    if not decoratorMinusFriendly(isData, birth_date):
        print("Incorrect date!")
        return
    editing(book, index, surname=surname, name=name, pnumb=p_number, date=birth_date)
    print("Editing was successful")
def news_detail(request, title):
    """
    Returns all fields of a given news item, plus the search query and
    matching tablet PCs, for the news page.
    """
    title = title.replace('-', ' ')
    news = News.objects.get(title=title)
    news_title = news.title
    news_intro = news.intro
    news_text = news.text
    news_date = news.date
    news_pic = [news.pic1, news.pic2]
    news_brand = Brand.objects.filter(news__title=title)
    news_tablet = TabletPC.objects.filter(news__title=title)
    query_string, found_entries = search(request)
    return render_to_response(
        'news.html',
        {'query_string': query_string, 'found_entries': found_entries,
         'news_title': news_title, 'news_intro': news_intro,
         'news_text': news_text, 'news_date': news_date,
         'news_pic': news_pic, 'news_brand': news_brand,
         'news_tablet': news_tablet},
        context_instance=RequestContext(request))
def main_loop():
    while True:
        system('clear')
        driver = init_driver(URL)
        header(TITLE)
        key_word = input("[+] Search: ")
        exit_check(key_word, driver)
        driver = search(driver, key_word)
        titles, links = get_search_results(driver)
        choice = browse_loop(titles, links, 'browse')
        driver = init_driver(links[int(choice) - 1])
        episode_titles, episode_links = get_episodes(driver)
        episode_choice = browse_loop(episode_titles, episode_links, 'episodes')
        driver = init_driver(episode_links[int(episode_choice) - 1])
        video_url = get_video_url(driver)
        driver = init_driver(video_url)
        dl_url, status = get_video(driver)
        if status:
            download_video(dl_url, titles[int(choice) - 1],
                           episode_titles[int(episode_choice) - 1])
        else:
            input("[+] Press any key to continue...")
            continue
def show_brand(request, name):
    """
    Returns the brand name, info, logo, related news and tablet PCs,
    plus the search query and matching tablet PCs, for the brand page.
    """
    try:
        name = name.replace('-', ' ').title()
        brand = Brand.objects.get(name=name)
        brand_name = brand.name
        brand_info = brand.info
        brand_logo = brand.logo
        brand_tablet = TabletPC.objects.filter(brand__name=name)
        brand_news = News.objects.filter(brand__name=name)
    except Brand.DoesNotExist:
        brand = brand_name = brand_info = None
        brand_logo = brand_tablet = brand_news = None
    query_string, found_entries = search(request)
    return render_to_response(
        'brand.html',
        {'name': name, 'brand_name': brand_name, 'brand_info': brand_info,
         'brand_news': brand_news, 'brand_logo': brand_logo,
         'brand_tablet': brand_tablet, 'query_string': query_string,
         'found_entries': found_entries},
        context_instance=RequestContext(request))
def tablet_detail(request, name):
    """
    Returns all fields of a given tablet PC, plus the search query and
    matching tablet PCs, for the tablet page.
    """
    name = name.replace('-', ' ')
    tablet = TabletPC.objects.get(name=name)
    tablet_name = tablet.name
    tablet_intro = tablet.intro
    tablet_info = tablet.info
    tablet_os = tablet.os
    tablet_link = tablet.link
    tablet_pic = [tablet.pic1, tablet.pic2, tablet.pic3, tablet.pic4]
    tablet_brand = tablet.brand
    tablet_news = News.objects.filter(tablet__name=name)
    query_string, found_entries = search(request)
    return render_to_response(
        'tablet.html',
        {'query_string': query_string, 'found_entries': found_entries,
         'tabletpc': tablet, 'tablet_name': tablet_name,
         'tablet_intro': tablet_intro, 'tablet_info': tablet_info,
         'tablet_os': tablet_os, 'tablet_pic': tablet_pic,
         'tablet_link': tablet_link, 'tablet_news': tablet_news,
         'tablet_brand': tablet_brand},
        context_instance=RequestContext(request))
def show(request):
    """
    Returns the brands and the names of the twenty newest tablet PCs;
    if available, the latest news title and intro text; plus the search
    query and matching tablet PCs, for the index page.
    """
    brands = Brand.objects.order_by('name')
    tablets = TabletPC.objects.order_by('-id')[:20]
    try:
        news = News.objects.order_by('-id')[:1].get()
        news_title = news.title
        news_intro = news.intro
    except News.DoesNotExist:
        news = news_title = news_intro = None
    query_string, found_entries = search(request)
    return render_to_response(
        'index.html',
        {'query_string': query_string, 'found_entries': found_entries,
         'brands': brands, 'tablets': tablets,
         'news_title': news_title, 'news_intro': news_intro},
        context_instance=RequestContext(request))
def choose_menupoint():
    inputs = ui.get_inputs("Please enter a number: ", "")
    option = inputs[0]
    if option == "1":
        my_events.start()
    elif option == "2":
        join_events.start()
    elif option == "3":
        functions.search()
    elif option == "4":
        functions.show_random_events()
    elif option == "5":
        functions.join_event()
    elif option == "0":
        sys.exit(0)
    else:
        raise KeyError("There is no such option.")
def delete(book):
    response = input("Which way do you choose?\n"
                     " input \"1\" ---> deletion by name + surname\n"
                     " input \"2\" ---> deletion by phone number\n>>>")
    if response.strip() == '1':
        surname = input("Input abonent's surname >>")
        name = input("Input abonent's name >>")
        if not isNameAndSurname(surname, name):
            return
        ab_arr = search(book, surname=surname, name=name)
        if len(ab_arr) == 0:
            print("I can not find this Abonent. Please, try again")
            return
        book.remove(ab_arr[0])
        print("Abonent was deleted successfully")
    elif response.strip() == '2':
        p_number = input("Input abonent's phone number >>")
        if not isNumber(p_number):
            print("Incorrect number!")
            return
        ab_arr = search(book, pnumb=p_number)
        if len(ab_arr) == 0:
            print("I can not find this Abonent. Please, try again")
            return
        elif len(ab_arr) == 1:
            book.remove(ab_arr[0])
            print("Abonent was deleted successfully")
            return
        else:
            print("I found several Abonents. Which of them do you want to delete? "
                  "Specify the numbers (starting from 1, separated by spaces)")
            printBook(ab_arr)
            response = input()
            response = response.strip().split(' ')
            N = len(ab_arr)
            for i in range(len(response)):
                if not response[i].isdigit():
                    print("I don't understand(((((")
                    return
                response[i] = int(response[i])
            for index in response:
                if index <= N:
                    book.remove(ab_arr[index - 1])
                    print("Abonent was deleted successfully")
    else:
        print("I don't understand(((((")
def clientthread(conn, addr):
    conn.send(
        'Service Started. Choose an option using the uppercase letter\n'
        ' #(R)egister\n #(U)pload files\n #(S)earch for a file\n #(E)xit'
    )
    onlinePeers.append(addr[0])
    while True:
        data = conn.recv(1024)
        if not data:
            break
        elif data.split('\n')[0] == 'REGISTER':
            functions.register(conn, addr, data.split('\n')[1], str(addr[1]))
            # functions.register(conn, addr, "client" + str(addr[1]))
        elif data.split('\n')[0] == 'SHARE_FILES':
            functions.share(conn, addr, data.split('\n')[1], str(addr[1]))
        elif data.split('\n')[0] == 'SEARCH':
            functions.search(conn, addr, data.split('\n')[1], onlinePeers)
    onlinePeers.remove(addr[0])
    conn.close()
def clientthread(conn, addr):
    conn.send('Welcome to the server. Select an option\n'
              ' 1. (R)egister\n 2. (U)pload files\n 3. (S)earch for a file\n 4. (E)xit')
    activePeers.append(addr[0])
    while True:
        data = conn.recv(1024)
        if not data:
            break
        if data.split('\n')[0] == 'REGISTER':
            functions.register(conn, addr, data.split('\n')[1])
        elif data.split('\n')[0] == 'SHARE_FILES':
            functions.share(conn, addr, data.split('\n')[1])
        elif data.split('\n')[0] == 'SEARCH':
            functions.search(conn, addr, data.split('\n')[1], activePeers)
        elif data == 'TEST':
            functions.checkDB(conn)
    activePeers.remove(addr[0])
    conn.close()
def find(book):
    print("Hello! You will be prompted to fill in user information. "
          "Input \"-\" if you don't want to specify a field for the search")
    surname = input("Input abonent's surname >>")
    name = input("Input abonent's name >>")
    p_number = input("Input abonent's phone number >>")
    birth_date = input("Input abonent's birth date >>")
    result = search(book, surname=surname, name=name, pnumb=p_number, date=birth_date)
    print("I found...")
    if len(result) == 0:
        print("Nothing(((")
    else:
        printBook(result)
def age(book):
    surname = input("Input abonent's surname >>")
    name = input("Input abonent's name >>")
    if not isNameAndSurname(surname, name):
        return
    result = search(book, surname=surname, name=name)
    if len(result) == 0:
        print("I can not find this Abonent. Please, try again")
        return
    if result[0].age() == -1:
        print("Age is not set")
    else:
        print(result[0].age())
def clientthread(conn, addr):
    conn.send(
        'Welcome to the server. Select an option\n'
        ' 1. (R)egister\n 2. (U)pload files\n 3. (S)earch for a file\n 4. (E)xit'
    )
    activePeers.append(addr[0])
    while True:
        data = conn.recv(1024)
        if not data:
            break
        if data.split('\n')[0] == 'REGISTER':
            functions.register(conn, addr, data.split('\n')[1])
        elif data.split('\n')[0] == 'SHARE_FILES':
            functions.share(conn, addr, data.split('\n')[1])
        elif data.split('\n')[0] == 'SEARCH':
            functions.search(conn, addr, data.split('\n')[1], activePeers)
        elif data == 'TEST':
            functions.checkDB(conn)
    activePeers.remove(addr[0])
    conn.close()
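The `clientthread` variants above all frame requests the same way: a command word, a newline, then the payload, recovered with `data.split('\n')`. That framing can be shown in isolation — the helper names below are illustrative, not part of the original servers:

```python
def build_request(command, payload=''):
    # Command word, newline, payload -- the shape clientthread splits on.
    return command + '\n' + payload


def parse_request(data):
    # partition() keeps a payload that itself contains newlines intact,
    # unlike split('\n')[1], which would drop everything after the second one.
    command, _, payload = data.partition('\n')
    return command, payload
```

Commands without a payload (such as `TEST` in the snippets above) simply parse to an empty payload string.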
def news_all(request):
    """
    Returns all news entries, plus the search query and matching
    tablet PCs, for the news overview page.
    """
    news = News.objects.all().order_by('-id')
    query_string, found_entries = search(request)
    return render_to_response(
        'newsall.html',
        {'query_string': query_string, 'found_entries': found_entries,
         'news': news},
        context_instance=RequestContext(request))
def show_android(request):
    """
    Returns all tablet PCs with the Android operating system and news
    mentioning Android, plus the search query and matching tablet PCs,
    for the Android page.
    """
    tablets = TabletPC.objects.filter(os__icontains="android")
    tablet_news = News.objects.filter(
        Q(intro__icontains="android") | Q(text__icontains="android"))
    query_string, found_entries = search(request)
    return render_to_response(
        'android.html',
        {'query_string': query_string, 'found_entries': found_entries,
         'tablet_news': tablet_news, 'tablets': tablets},
        context_instance=RequestContext(request))
def add(book):
    surname = input("Input abonent's surname >>")
    name = input("Input abonent's name >>")
    if not isNameAndSurname(surname, name):
        return
    p_number = input("Input abonent's phone number >>")
    if not isNumber(p_number):
        print("Incorrect number!")
        return
    birth_date = ""
    response = input("Do you want to add the Abonent's birth date? "
                     "Input \"Y\" if yes, or anything else if no >>")
    if response.strip() == "Y":
        birth_date = input("Input abonent's birth date >>")
        if not isData(birth_date):
            print("Incorrect date!")
            return
    ab = search(book, surname=surname, name=name)
    if len(ab) > 0:
        print("I found an Abonent with the same name and surname. What should I do? Input the number")
        response = input("\"1\"---> delete the old record and add the new record\n"
                         "\"2\"---> change the name and surname of the new record\n"
                         "\"3\"---> abort addition\n>>>")
        if response == '1':
            book.remove(ab[0])
            book.append(Abonent(surname + ';' + name + ';' + p_number + ';' + birth_date))
            print("Success")
        elif response == '2':
            surname = input("Input new abonent's surname >>")
            name = input("Input new abonent's name >>")
            book.append(Abonent(surname + ';' + name + ';' + p_number + ';' + birth_date))
            print("Success")
        elif response == '3':
            print("Addition aborted")
            return
        else:
            print("Incorrect number!")
            return
    else:
        book.append(Abonent(surname + ';' + name + ';' + p_number + ';' + birth_date))
        print("New abonent was added successfully")
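The `add`, `edit`, `find`, `age`, and `delete` helpers above all rely on a shared `search(book, **fields)` routine that is not shown. A plausible sketch, assuming a record object with matching attribute names and treating `"-"` or an empty string as "any value" (the matching rules are an assumption, not taken from the original):

```python
from types import SimpleNamespace


def search(book, **criteria):
    """Return all records whose fields match the given criteria.

    A value of '' or '-' means the field is not used for filtering.
    (Sketch only -- the real Abonent type and matching rules are not shown.)
    """
    def matches(record):
        return all(
            value in ('', '-') or getattr(record, field, None) == value
            for field, value in criteria.items()
        )
    return [record for record in book if matches(record)]


# Toy data to exercise the sketch.
book = [SimpleNamespace(surname='Ivanov', name='Ivan'),
        SimpleNamespace(surname='Petrov', name='Petr')]
```

Passing `surname='Ivanov', name='-'` then returns only the first record, which mirrors how the snippets above combine filled-in and skipped fields.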
def code_tokenify(arity_list: [(int, TokenLib.Token)]) -> [Code_Token]:
    code_tokens = []
    exprs = []
    expr = []
    patterns = [
        "0", "1", "2", "020", "021", "022", "02", "10", "11", "12", "20",
        "21", "22", "102", "110", "111", "112", "120", "121", "122", "202",
        "210", "211", "212", "220", "221", "222"
    ]
    pattern = ""
    while len(arity_list):
        if (pattern in patterns
                and pattern + str(arity_list[-1][0]) not in patterns):
            exprs.append([pattern, expr])
            expr = []
            pattern = ""
        pattern += str(arity_list[-1][0])
        expr.append(arity_list[-1][1])
        arity_list.pop()
    if expr and pattern in patterns:
        exprs.append([pattern, expr])
        expr = []
        pattern = ""
    for expr in exprs:
        arity_pattern, data = expr
        ctkn = None
        if arity_pattern == "0":  # Nilad
            ctkn = Code_Token(self, None, data[0].get_value())
        elif arity_pattern == "1":  # Monad
            ctkn = Code_Token(functions.search(data[0]), None, Relative_Argument)
        elif arity_pattern == "2":  # Dyad
            ctkn = Code_Token(functions.search(data[0]), Relative_Argument, Relative_Argument)
        elif arity_pattern == "020":  # Nilad-Dyad-Nilad
            ctkn = Code_Token(functions.search(data[1]), data[0].get_value(), data[2].get_value())
        elif arity_pattern == "021":  # Nilad-Dyad-Monad
            right = Code_Token(functions.search(data[2]), None, Relative_Argument)
            ctkn = Code_Token(functions.search(data[1]), data[0].get_value(), right)
        elif arity_pattern == "02":  # Nilad-Dyad
            ctkn = Code_Token(functions.search(data[1]), data[0].get_value(), Relative_Argument)
        elif arity_pattern == "10":  # Monad-Nilad
            ctkn = Code_Token(functions.search(data[0]), None, data[1].get_value())
        elif arity_pattern == "11":  # Monad-Monad
            right = Code_Token(functions.search(data[1]), None, Relative_Argument)
            ctkn = Code_Token(functions.search(data[0]), None, right)
        elif arity_pattern == "12":  # Monad-Dyad
            left = Code_Token(functions.search(data[0]), None, Relative_Argument)
            ctkn = Code_Token(functions.search(data[1]), left, Relative_Argument)
        elif arity_pattern == "20":  # Dyad-Nilad
            ctkn = Code_Token(functions.search(data[0]), Relative_Argument, data[1].get_value())
        elif arity_pattern == "21":  # Dyad-Monad
            right = Code_Token(functions.search(data[1]), None, Relative_Argument)
            ctkn = Code_Token(functions.search(data[0]), Relative_Argument, right)
        elif arity_pattern == "22":  # Dyad-Dyad
            left = Code_Token(functions.search(data[0]), Relative_Argument, Relative_Argument)
            ctkn = Code_Token(functions.search(data[1]), left, Relative_Argument)
        elif arity_pattern == "102":  # Monad-Nilad-Dyad
            left = Code_Token(functions.search(data[0]), None, data[1].get_value())
            ctkn = Code_Token(functions.search(data[2]), left, Relative_Argument)
        elif arity_pattern == "110":  # Monad-Monad-Nilad
            right = Code_Token(functions.search(data[1]), None, data[2].get_value())
            ctkn = Code_Token(functions.search(data[0]), None, right)
        elif arity_pattern == "111":  # Monad-Monad-Monad
            right_1 = Code_Token(functions.search(data[2]), None, Relative_Argument)
            right = Code_Token(functions.search(data[1]), None, right_1)
            ctkn = Code_Token(functions.search(data[0]), None, right)
        elif arity_pattern == "120":  # Monad-Dyad-Nilad
            left = Code_Token(functions.search(data[0]), None, Relative_Argument)
            ctkn = Code_Token(functions.search(data[1]), left, data[2].get_value())
        elif arity_pattern == "121":  # Monad-Dyad-Monad
            left = Code_Token(functions.search(data[0]), None, Relative_Argument)
            right = Code_Token(functions.search(data[2]), None, Relative_Argument)
            ctkn = Code_Token(functions.search(data[1]), left, right)
        elif arity_pattern == "122":  # Monad-Dyad-Dyad
            left = Code_Token(functions.search(data[0]), None, Relative_Argument)
            right = Code_Token(functions.search(data[2]), Relative_Argument, Relative_Argument)
            ctkn = Code_Token(functions.search(data[1]), left, right)
        elif arity_pattern == "202":  # Dyad-Nilad-Dyad
            left = Code_Token(functions.search(data[0]), Relative_Argument, data[1].get_value())
            ctkn = Code_Token(functions.search(data[2]), left, Relative_Argument)
        elif arity_pattern == "210":  # Dyad-Monad-Nilad
            right = Code_Token(functions.search(data[1]), None, data[2].get_value())
            ctkn = Code_Token(functions.search(data[0]), Relative_Argument, right)
        elif arity_pattern == "211":  # Dyad-Monad-Monad
            right_1 = Code_Token(functions.search(data[2]), None, Relative_Argument)
            right = Code_Token(functions.search(data[1]), None, right_1)
            ctkn = Code_Token(functions.search(data[0]), Relative_Argument, right)
        elif arity_pattern == "212":  # Dyad-Monad-Dyad
            right_1 = Code_Token(functions.search(data[1]), None, Relative_Argument)
            left = Code_Token(functions.search(data[0]), Relative_Argument, right_1)
            ctkn = Code_Token(functions.search(data[2]), left, Relative_Argument)
        elif arity_pattern == "220":  # Dyad-Dyad-Nilad
            left = Code_Token(functions.search(data[0]), Relative_Argument, Relative_Argument)
            ctkn = Code_Token(functions.search(data[1]), left, data[2].get_value())
        elif arity_pattern == "221":  # Dyad-Dyad-Monad
            left = Code_Token(functions.search(data[0]), Relative_Argument, Relative_Argument)
            right = Code_Token(functions.search(data[2]), None, Relative_Argument)
            ctkn = Code_Token(functions.search(data[1]), left, right)
        elif arity_pattern == "222":  # Triple Dyad
            left_1 = Code_Token(functions.search(data[0]), Relative_Argument, Relative_Argument)
            left = Code_Token(functions.search(data[1]), left_1, Relative_Argument)
            ctkn = Code_Token(functions.search(data[2]), left, Relative_Argument)
        code_tokens.append(ctkn)
    return code_tokens
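The consumption loop at the top of `code_tokenify` greedily pops arities off the end of the list, extending the current digit pattern until adding one more digit would leave the recognized-pattern table. That chunking step can be demonstrated on bare arity digits — this standalone `chunk_arities` is an illustration of the same logic, not a function from the original:

```python
PATTERNS = [
    "0", "1", "2", "020", "021", "022", "02", "10", "11", "12", "20",
    "21", "22", "102", "110", "111", "112", "120", "121", "122", "202",
    "210", "211", "212", "220", "221", "222",
]


def chunk_arities(arities, patterns=PATTERNS):
    """Split a list of arities (consumed from the end, as in code_tokenify)
    into greedy runs that each form a recognized pattern."""
    chunks, pattern = [], ""
    items = list(arities)
    while items:
        nxt = str(items[-1])
        # Cut the current run when the pattern is complete but cannot grow.
        if pattern in patterns and pattern + nxt not in patterns:
            chunks.append(pattern)
            pattern = ""
        pattern += nxt
        items.pop()
    if pattern and pattern in patterns:
        chunks.append(pattern)
    return chunks
```

Because the list is consumed right-to-left, `[2, 0, 1]` yields the single run `"102"` (Monad-Nilad-Dyad), matching the dispatch table above.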
def answer_process():
    if not request.args:
        return redirect(url_for('index'))
    some_id = request.args.get('some_id')
    if some_id:
        owner_list = []
        owner_stats = db.session.query(Posts.wall_owner).all()
        for item in owner_stats:
            owner_list.append(str(item[0]))
        if str(some_id) in owner_list:
            w = 'Information about this user already exists.'
            return render_template('search.html', owner_id=some_id, w=w)
        else:
            info = search(some_id)
            if len(info) == 2:
                return render_template(info[0], message=info[1])
            else:
                all_comments, all_posts, all_authors = info[0], info[1], info[2]
                com_df = pd.DataFrame(all_comments)
                post_df = pd.DataFrame(all_posts)
                auth_df = pd.DataFrame(all_authors)
                # posts
                for index, row in post_df.iterrows():
                    post = Posts(id=row['id'], author_id=row['user_id'],
                                 text=row['text'], lem_text=row['lem_text'],
                                 likes=row['likes'], date_time=row['date_time'],
                                 weekday=row['weekday'], wall_owner=row['wall_owner'])
                    try:
                        db.session.add(post)
                        db.session.commit()
                        db.session.refresh(post)
                    except sqlalchemy.exc.InvalidRequestError:
                        db.session.rollback()
                # comments
                for index, row in com_df.iterrows():
                    comment = Comments(id=row['id'], post_id=row['post_id'],
                                       author_id=row['author_id'], text=row['text'],
                                       lem_text=row['lem_text'], likes=row['likes'],
                                       date_time=row['date_time'], weekday=row['weekday'],
                                       wall_owner=row['wall_owner'])
                    try:
                        db.session.add(comment)
                        db.session.commit()
                        db.session.refresh(comment)
                    except sqlalchemy.exc.SQLAlchemyError:
                        db.session.rollback()
                # authors
                author_base = []
                author_stats = db.session.query(Authors.id).all()
                for item in author_stats:
                    author_base.append(str(item[0]))
                for index, row in auth_df.iterrows():
                    if str(row['id']) not in author_base:
                        author = Authors(id=row['id'], sex=row['sex'],
                                         bdate=row['bdate'], day=row['day'],
                                         month=row['month'], year=row['year'],
                                         city=row['city'], faculty=row['faculty'],
                                         books=row['books'], interests=row['interests'],
                                         home_town=row['home_town'], career=row['career'])
                        db.session.add(author)
                        db.session.commit()
                        db.session.refresh(author)
                w = 'Information downloaded!'
                return render_template('search.html', owner_id=some_id, w=w)
    else:
        return redirect(url_for('answer_process'))
import functions

# Read an unsolved Sudoku from a CSV file called input.csv, with 0's
# representing blanks.
with open("input.csv", 'r') as test_file:
    text = test_file.read()
grid1 = text.replace(',', '').replace('\n', '')

# Solve the Sudoku puzzle using the constraint-propagation algorithm.
output = functions.display(functions.search(functions.parse_grid(grid1)))

# Convert the final result to CSV format: a comma after each cell,
# a newline after every ninth cell.
j = 1
list1 = ''
for i in output:
    if j % 9 != 0:
        list1 = list1 + i + ","
    else:
        list1 = list1 + i + "\n"
    j += 1

# Save the result in a CSV file called Result.csv.
with open("Result.csv", 'w') as out:
    out.write(list1)
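The manual comma-and-newline bookkeeping in the conversion loop can also be done with the standard `csv` module. This sketch rebuilds the same row structure from a flat result string, assuming (as above) one character per cell; `grid_to_csv` is an illustrative helper, not part of the original script:

```python
import csv
import io


def grid_to_csv(grid, size=9):
    """Write a flat cell string as CSV text, one row per `size` cells."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    for i in range(0, len(grid), size):
        writer.writerow(list(grid[i:i + size]))
    return buf.getvalue()
```

The result can be written to `Result.csv` with a single `open(...).write(...)`, and the `csv` module takes care of any quoting that hand-built strings would miss.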
def main():
    op = 0
    while op != 14:
        print('---' * 40)
        print('\t******** Library Management System ********')
        print('---' * 40)
        print('\tpress 1 for Add Student')
        print('\tpress 2 for Add Faculty')
        print('\tpress 3 for Add Book')
        print('\tpress 4 for issue book for Student')
        print('\tpress 5 for issue book for faculty')
        print('\tpress 6 for search book')
        print('\tpress 7 for search student')
        print('\tpress 8 for search faculty')
        print('\tpress 9 for return book from Student')
        print('\tpress 10 for return book from faculty')
        print('\tpress 11 for display information of student')
        print('\tpress 12 for display information of faculty')
        print('\tpress 13 for display information of books')
        print('\tpress 14 for Exit')
        print('---' * 40)
        op = int(input('Enter choice :: '))
        print('---' * 40)
        print("[[ Please fill in the entry carefully ]]")
        if op == 1:
            f.add_student()
        elif op == 2:
            f.add_faculty()
        elif op == 3:
            f.add_books()
        elif op == 4:
            f.book_issue()
        elif op == 5:
            f.book_issue_fac()
        elif op == 6:
            if os.path.exists('Books.pkl'):
                with open('Books.pkl', 'rb') as b:
                    isbn = input('Enter book ISBN :: ').upper()
                    f.search(b, isbn, 3)
            else:
                print('No such file')
        elif op == 7:
            if os.path.exists('Student.pkl'):
                with open('Student.pkl', 'rb') as b:
                    stu = input('Enter Student ID :: ').upper()
                    f.search(b, stu, 1)
            else:
                print('No such file')
        elif op == 8:
            if os.path.exists('Faculty.pkl'):
                with open('Faculty.pkl', 'rb') as b:
                    fac = input('Enter Faculty ID :: ').upper()
                    f.search(b, fac, 2)
            else:
                print('No such file')
        elif op == 9:
            f.return_student()
        elif op == 10:
            f.return_faculty()
        elif op == 11:
            f.display_student()
        elif op == 12:
            f.display_faculty()
        elif op == 13:
            f.display_books()
        elif op == 14:
            print("*************** Thank you ********************")
            break
        else:
            print("Invalid choice")
def main(): command_list = { '1. Stop': 'If you want to stop using our Phone book, you should use this command', '2. Add persons': 'You can add persons to Phone book', '3. Visualisation': 'You can see all elements of Phone book and some information about it', '4. Search': 'Searching over phone book by one or several fields', '5. Get the age of the person': 'You can get the actual age of the person', '6. Change data': 'You can change the information about existing person', '7. Get number': 'You can get the phone number of the person', '8. Delete person': 'You can delete a record about the person', '9. Compare age of persons': 'You can find out how many people are younger/older/equal than/to N years old.' 'You should enter the parameter N', '10. Clear screen': 'You can clear screen if you want', '11. Search by month and day': 'You can search records in Phone book by month and day at the same time', '12. Delete person by number': 'You can delete person by number' } phone_book = {} with open("surname_numbers.txt") as file: for line in file: key, *value = line.replace('\n', ':').split(':') phone_book[key] = value[:-1] while True: print('Welcome to our phone book!') func.visualisation_of_commands(command_list) print("Enter the number of the command, please:", end=' ') command = input() if command == '1': break elif command == '2': print('Enter your data about person.The format of data is ' + color.BOLD + 'Name Surname:Number:Date of birth(if you want)' + color.END) print('Date of birth format: XX/XX/XXXX') print('Example: Alex Bystrov:89100000000:01/04/1999', end=' ') data = input().replace('\n', ':').split(':') while len(data) < 2: print('Wrong, format. 
You should enter at least full name and phone number.') print('Example: Alex Bystov:89100000000:01/04/1999') print('Please, try again here:', end=' ') data = input().replace('\n', ':').split(':') name, *value = data if len(value) > 1: number = value[0] date = value[1] else: number = value[0] date = '' while func.number_check(number) == 0: print("Please try again, enter only number:", end=' ') number = input() while func.date_check(date) == 0: print("Please try again,enter only date:", end=' ') date = input() while func.name_check(name) == 0: print("Please try again,enter name and surname:", end=' ') name = input() name = name.title() if number[0] == '+' and number[1] == '7': number = number.replace('+7', '8', 1) func.add_persons(phone_book, name, number, date) elif command == '3': func.visualisation(phone_book) elif command == '4': print("Hy! Let's search for someone. You have 4 parameters to search \n" "Name Surname Number Date \n" "1. You can search by Name, you should enter: Name _ _ _ \n" "2. You can search by Surname, you should enter: _ Surname _ _\n" "3. You can search by Number, you should enter: _ _ Number _\n" "4. You can search by Birth Date, you should enter: _ _ _ Date\n" "5. You can search by Birth Date and Surname at the same time, you should enter: _ Surname _ Date\n" "6. You can search by Name and Surname at the same time, you should enter: Name Surname _ _\n") print('Example: Petr _ _ _') print('Enter your data:', end=' ') data = input().split() while len(data) != 4: print('Wrong, format. 
        You should search by 4 parameters.')
        print('Example: Petr _ _ _ or _ Slavutin _ _ _')
        print('Please, try again here:', end=' ')
        data = input().split()
        ob1, ob2, ob3, ob4 = data[0], data[1], data[2], data[3]
        func.search(phone_book, ob1, ob2, ob3, ob4)
    elif command == '5':
        print('Choose the name and surname please.')
        print('Example: Alex Bystov', end=' ')
        name = input()
        while func.name_check(name) == 0:
            print('Please try again, enter name and surname:', end=' ')
            name = input()
        name = name.title()
        func.age_of_the_person(phone_book, name, 0)
    elif command == '6':
        print('What data do you want to change?')
        print('1. Name and surname')
        print('2. Number')
        print('3. Birth date')
        print('Enter the number of the command here:', end=' ')
        choice = input()
        if choice == '1':
            print('Choose the name and surname you want to change.')
            print('Example: Alex Bystov.')
            print('Enter your data here:', end=' ')
            name = input()
            while func.name_check(name) == 0:
                print('Please try again, enter name and surname:', end=' ')
                name = input()
            name = name.title()
            func.change_name(phone_book, name)
        if choice == '2':
            print('Choose the name and surname whose number you want to change.')
            print('Example: Alex Bystov.')
            print('Enter your data here:', end=' ')
            name = input()
            while func.name_check(name) == 0:
                print('Please try again, enter name and surname:', end=' ')
                name = input()
            name = name.title()
            func.change_number(phone_book, name)
        if choice == '3':
            print('Choose the name and surname whose birth date you want to change.')
            print('Example: Alex Bystov.')
            print('Enter your data here:', end=' ')
            name = input()
            while func.name_check(name) == 0:
                print('Please try again, enter name and surname:', end=' ')
                name = input()
            name = name.title()
            func.change_date(phone_book, name)
    elif command == '7':
        print('Choose the name and surname whose number you want to know please.')
        print('Example: Alex Bystov.')
        print('Enter your data here:', end=' ')
        name = input()
        while func.name_check(name) == 0:
            print('Please try again, enter name and surname:', end=' ')
            name = input()
        name = name.title()
        func.get_ph_number(phone_book, name)
    elif command == '8':
        print('Choose the name and surname you want to delete please.')
        print('Example: Alex Bystov.')
        print('Enter your data here:', end=' ')
        name = input()
        while func.name_check(name) == 0:
            print('Please try again, enter name and surname:', end=' ')
            name = input()
        name = name.title()
        func.del_person(phone_book, name)
    elif command == '9':
        print('Please enter the age (N) against which you want to compare ages in the phone book:', end=' ')
        comparison_number = input()
        while func.comparison_number_check(comparison_number) == 0:
            print('Wrong input format, please try again:', end=' ')
            comparison_number = input()
        print('Do you want to see people younger (enter 1), older (enter 2) '
              'or equal (enter 3) to {}?'.format(comparison_number))
        print('Enter data here:', end=' ')
        parameter = input()
        while parameter not in ('1', '2', '3'):
            print('Wrong input format, you should enter 1, 2 or 3.')
            print('Example: 1')
            print('Enter your parameter here:', end=' ')
            parameter = input()
        func.compare_by_age(phone_book, comparison_number, parameter)
    elif command == '10':
        func.cls()
    elif command == '11':
        print('Please enter the month and day. The input format: Day/Month (XX/XX)')
        print('Example: 21/11')
        date = input('Enter date here: ')
        while func.date_check_for_search(date) == 0:
            print('Please try again, enter only the date:', end=' ')
            date = input()
        func.search_by_month_and_day(phone_book, date)
    elif command == '12':
        print('Please enter the number', end=' ')
        number = input()
        while func.number_check(number) == 0:
            print('Please try again, enter the number:', end=' ')
            number = input()
        func.delete_person_by_number(phone_book, number)

with open('surname_numbers.txt', 'w') as out:
    for key, value in phone_book.items():
        out.write('{}:{}:{}\n'.format(key, *value))
print('Thank you for using our Phone book!')
print('We hope we will see you again! Good Luck!')
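The save loop above persists each entry as a `name:number:birthdate` line. A minimal sketch of writing and reloading that format, assuming `phone_book` maps a full name to a `(number, birth_date)` tuple (the exact value shape is an assumption, not shown in the original):

```python
# Sketch: round-trip the `name:number:birthdate` format used above.
# Assumption: phone_book maps "Name Surname" -> (number, birth_date).
phone_book = {'Alex Bystov': ('+79001112233', '21/11/1990')}

with open('surname_numbers.txt', 'w') as out:
    for key, value in phone_book.items():
        out.write('{}:{}:{}\n'.format(key, *value))

reloaded = {}
with open('surname_numbers.txt') as src:
    for line in src:
        # The colon is safe as a separator here because neither names,
        # numbers, nor DD/MM dates contain one.
        name, number, birth = line.rstrip('\n').split(':')
        reloaded[name] = (number, birth)

print(reloaded == phone_book)  # True
```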
    # (inside the `selc_levGame == "1"` branch, which begins above this fragment)
    print(" ")
    print("          Sudoku game: Easy          ")
    print(" ")
    # TODO: replace this fixed puzzle with a randomly drawn board
    grid = '..3.2.6..9..3.5..1..18.64....81.29..7.......8..67.82....26.95..8..2.3..9..5.1.3..'
    display(grid_values_StartingBoard(grid))
    print(" ")
    for i in solutionGame:
        print(i, ".", solutionGame[i])
    solutionGame_choice = input("Please type in the number of your choice: ")
    if solutionGame_choice == "1":
        display(eliminate(grid_values(grid)))
        printBoard = False
    elif solutionGame_choice == "2":
        display(search(grid_values(grid)))
        printBoard = False
    elif solutionGame_choice == "3":
        break
    elif solutionGame_choice == "4":
        exit()
elif selc_levGame == "2":
    print("Option not defined yet")  # FIXME: level not implemented yet
elif selc_levGame == "3":
    print("Option not defined yet")
elif selc_levGame == "4":
    break
elif selc_levGame == "5":
    exit()
else:
noresult = 'No results for your search'  # was: 'Aucun résultat pour votre recherche'
fr = '<a class="selected" href="?lang=fr">Français</a>'
en = '<a class="" href="?lang=en">Anglais</a>'
if 'word' in form:
    word = form["word"].value
    if lang == 'en':
        en = '<a class="selected" href="?word=' + word + '&lang=en">English</a>'
        fr = '<a class="" href="?word=' + word + '&lang=fr">French</a>'
    else:
        fr = '<a class="selected" href="?word=' + word + '&lang=fr">Français</a>'
        en = '<a class="" href="?word=' + word + '&lang=en">Anglais</a>'
    model = Word2Vec.load('./rss_models/model_' + lang)
    similarities = functions.getBestSimilarities(model, word, lang, 5)  # fixed typo: similatities
    results = functions.search(word, lang)
    print('''<!DOCTYPE html>
<html lang="''' + lang + '''">
<head>
    <title>''' + title + " - " + word + '''</title>
    <link rel="stylesheet" href="../web/styles.css">
</head>
<body>
    <div class="logo">
        <img src="../web/images/logo.png" style="border: 0pt none; width: 300px; height: 300px;">
    </div>
    <form class="searchfield cf">
        <input type="text" name="word" value="''' + word + '''">
        <input type="hidden" name="lang" value="''' + lang + '''">
        <button type="submit">''' + button + '''</button>
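The page-building code above splices the user-supplied `word` straight into HTML attributes and text, which invites injection. A minimal hedged sketch of escaping with the standard library's `html.escape` (the variable names mirror the snippet; this is an illustration, not the original code's behavior):

```python
import html

# Hostile example input a user could submit as `word`.
word = '<script>alert(1)</script>'
# quote=True also escapes " and ', so the value is safe inside attributes.
safe_word = html.escape(word, quote=True)

snippet = '<input type="text" name="word" value="' + safe_word + '">'
print(snippet)
```

The same treatment would apply anywhere `word`, `title`, or `lang` is concatenated into the output.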
def onsearch(self, event):
    if self.search_text.GetValue() != '':
        try:
            fcn.search(self)
        except Exception:  # was a bare except
            self.SetStatusText('An error occurred')  # was: 'エラーが発生しました'
def DeleteAll():
    # Despite its name, this handler re-reads the data, applies every active
    # filter, and refreshes the listbox; it does not delete anything.
    companies = pd.read_csv('../Data/Companies.csv', encoding='latin1', index_col='Unnamed: 0')
    collaborations = pd.read_csv('../Data/Collaboration_Id.csv', encoding='latin1', index_col='Unnamed: 0')
    products = pd.read_csv('../Data/Products.csv', encoding='latin1', index_col='Unnamed: 0')
    notebooks = ft.df_to_dict(companies, collaborations, products)

    # Pairing each Tk checkbox variable with its label replaces the original
    # run of near-identical `if cvarN.get() == 1: ... .append(...)` statements.
    cpu_options = [
        (cvar1, "Intel Core i3"), (cvar2, "Intel Core i5"), (cvar3, "Intel Core i7"),
        (cvar4, "Intel Core M"), (cvar5, "AMD A9-Series"), (cvar6, "AMD E-Series"),
        (cvar7, "AMD A6-Series"), (cvar8, "Intel Celeron"), (cvar9, "AMD Ryzen"),
        (cvar10, "Intel Pentium"), (cvar11, "AMD FX"), (cvar12, "Intel Xeon"),
        (cvar13, "AMD A10-Series"), (cvar14, "AMD A8-Series"), (cvar15, "AMD A12-Series"),
        (cvar16, "AMD A4-Series"), (cvar17, "Samsung Cortex"), (cvar18, "Intel Core M"),
    ]
    memory = [label for var, label in cpu_options if var.get() == 1]  # CPU models, despite the name

    ram_options = [
        (svar1, '2GB'), (svar2, '4GB'), (svar3, '6GB'), (svar4, '8GB'), (svar5, '12GB'),
        (svar6, '16GB'), (svar7, '32GB'), (svar8, '24GB'), (svar9, '64GB'),
    ]
    ram = [label for var, label in ram_options if var.get() == 1]

    type_options = [
        (tvar1, 'Ultrabook'), (tvar2, 'Notebook'), (tvar3, 'Netbook'),
        (tvar4, 'Gaming'), (tvar5, '2 in 1 Convertible'), (tvar6, 'Workstation'),
    ]
    TypeName = [label for var, label in type_options if var.get() == 1]

    company_options = [
        (covar1, 'Apple'), (covar2, 'HP'), (covar3, 'Acer'), (covar4, 'Asus'),
        (covar5, 'Dell'), (covar6, 'Lenovo'), (covar7, 'Chuwi'), (covar8, 'MSI'),
        (covar9, 'Microsoft'), (covar10, 'Toshiba'), (covar11, 'Huawei'), (covar12, 'Xiaomi'),
        (covar13, 'Vero'), (covar14, 'Razer'), (covar15, 'Mediacom'), (covar16, 'Samsung'),
        (covar17, 'Google'), (covar18, 'Fujitsu'), (covar19, 'LG'),
    ]
    company = [label for var, label in company_options if var.get() == 1]

    os_options = [
        (osvar1, "macOS"), (osvar2, "No OS"), (osvar3, "Windows 10"), (osvar4, "Mac OS X"),
        (osvar5, "Linux"), (osvar6, "Android"), (osvar7, "Windows 10 S"),
        (osvar8, "Chrome OS"), (osvar9, "Windows 7"),
    ]
    OpSys = [label for var, label in os_options if var.get() == 1]

    if higher_price.get() or lowest_price.get():
        pricelst = [0.0]
        pricelst.append(float(lowest_price.get()) if lowest_price.get() else 0.0)
        pricelst.append(float(higher_price.get()) if higher_price.get() else 9999999.9)
        notebooks = ft.filter_by_price(notebooks, pricelst)
    if OpSys:
        notebooks = ft.filter_by_specification(notebooks, 'OpSys', OpSys)
    if company:
        notebooks = ft.filter_by_specification(notebooks, 'Company', company)
    if TypeName:
        notebooks = ft.filter_by_specification(notebooks, 'TypeName', TypeName)
    if ram:
        notebooks = ft.filter_by_specification(notebooks, "Ram", ram)
    if memory:
        notebooks = ft.filter_by_cpu(notebooks, memory)
    if searchEntry.get() not in ("", "Search"):
        notebooks = ft.search(searchEntry.get(), notebooks)
        print(searchEntry.get())
    if (not ram and not memory and not TypeName and not company and not OpSys
            and higher_price.get() == '' and lowest_price.get() == ''
            and searchEntry.get() in ('', 'Search')):
        # Fixed: the original compared the widget itself to "Search"
        # (`searchEntry == "Search"`), which is always False.
        notebooks = ft.df_to_dict(companies, collaborations, products)

    lbox.delete(0, END)
    if searchEntry.get() in ("", "Search"):
        for i in ft.strings(notebooks):
            lbox.insert(0, i)
        lbox.insert(0, '')
    else:
        for i in notebooks.values():
            lbox.insert(0, i)
        lbox.insert(0, '')
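The `ft.filter_by_specification` calls above each narrow a dict of notebooks by one attribute. Its implementation is not shown; a minimal sketch of such a helper, assuming the dict-of-dicts shape (both the body and the data layout are assumptions for illustration):

```python
def filter_by_specification(notebooks, spec, allowed):
    """Keep only entries whose `spec` attribute is one of `allowed`.
    Assumes each value is a dict of specifications, e.g. {'Ram': '8GB', ...}."""
    return {key: nb for key, nb in notebooks.items() if nb.get(spec) in allowed}

notebooks = {
    1: {'Company': 'Dell', 'Ram': '8GB'},
    2: {'Company': 'Apple', 'Ram': '16GB'},
    3: {'Company': 'Dell', 'Ram': '4GB'},
}
kept = filter_by_specification(notebooks, 'Ram', ['8GB', '16GB'])
print(sorted(kept))  # [1, 2]
```

Chaining several such calls, as the handler does, intersects the filters.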
input( "\nIntroduce an id for this password (Number between 1 and 1000): " )) service = input("Introduce the service (Facebook, Gmail...): ") user = input("Introduce your user: "******"Introduce your password: "******"2": functions.showAll() elif opcionMenu == "3": search_service = input( "\nOk, introduce the name of the service : ") functions.search(search_service) elif opcionMenu == "4": id_edit = int( input( "\nIntroduce the id of the user/password you want to edit : " )) user_edit = input("Introduce the new username : "******"Introduce the new password : "******"5": id_remove = int( input( "\nOk, introduce the id of the password you want to remove : " ))
def search():
    functions.search('all', '', '')
import functions as f

A = f.gerarLista()  # gerarLista: Portuguese for "generate list"
f.search(A)
def main(): #GENERAL torch.cuda.empty_cache() root = "/home/kuru/Desktop/veri-gms-master_noise/" train_dir = '/home/kuru/Desktop/veri-gms-master_noise/VeRispan/image_train/' source = {'verispan'} target = {'verispan'} workers = 4 height = 280 width = 280 train_size = 32 train_sampler = 'RandomSampler' #AUGMENTATION random_erase = True jitter = True aug = True #OPTIMIZATION opt = 'adam' lr = 0.0003 weight_decay = 5e-4 momentum = 0.9 sgd_damp = 0.0 nesterov = True warmup_factor = 0.01 warmup_method = 'linear' #HYPERPARAMETER max_epoch = 80 start = 0 train_batch_size = 8 test_batch_size = 100 #SCHEDULER lr_scheduler = 'multi_step' stepsize = [30, 60] gamma = 0.1 #LOSS margin = 0.3 num_instances = 4 lambda_tri = 1 #MODEL #arch = 'resnet101' arch='resnet101_ibn_a' no_pretrained = False #TEST SETTINGS load_weights = '/home/kuru/Desktop/veri-gms-master/IBN-Net_pytorch0.4.1/resnet101_ibn_a.pth' #load_weights = None start_eval = 0 eval_freq = -1 #MISC use_gpu = True print_freq = 10 seed = 1 resume = '' save_dir = '/home/kuru/Desktop/veri-gms-master_noise/spanningtree_verinoise_101_stride2/' gpu_id = 0,1 vis_rank = True query_remove = True evaluate = False dataset_kwargs = { 'source_names': source, 'target_names': target, 'root': root, 'height': height, 'width': width, 'train_batch_size': train_batch_size, 'test_batch_size': test_batch_size, 'train_sampler': train_sampler, 'random_erase': random_erase, 'color_jitter': jitter, 'color_aug': aug } transform_kwargs = { 'height': height, 'width': width, 'random_erase': random_erase, 'color_jitter': jitter, 'color_aug': aug } optimizer_kwargs = { 'optim': opt, 'lr': lr, 'weight_decay': weight_decay, 'momentum': momentum, 'sgd_dampening': sgd_damp, 'sgd_nesterov': nesterov } lr_scheduler_kwargs = { 'lr_scheduler': lr_scheduler, 'stepsize': stepsize, 'gamma': gamma } use_gpu = torch.cuda.is_available() log_name = 'log_test.txt' if evaluate else 'log_train.txt' sys.stdout = Logger(osp.join(save_dir, log_name)) print('Currently using 
GPU ', gpu_id) cudnn.benchmark = True print('Initializing image data manager') dataset = init_imgreid_dataset(root='/home/kuru/Desktop/veri-gms-master_noise/', name='verispan') train = [] num_train_pids = 0 num_train_cams = 0 print(len( dataset.train)) for img_path, pid, camid, subid, countid in dataset.train: #print(img_path) path = img_path[56+6:90+6] #print(path) folder = path[1:4] #print(folder) pid += num_train_pids newidd=0 train.append((path, folder, pid, camid,subid,countid)) num_train_pids += dataset.num_train_pids num_train_cams += dataset.num_train_cams pid = 0 pidx = {} for img_path, pid, camid, subid, countid in dataset.train: path = img_path[56+6:90+6] folder = path[1:4] pidx[folder] = pid pid+= 1 sub=[] final=0 xx=dataset.train newids=[] print(train[0:2]) train2={} for k in range(0,770): for img_path, pid, camid, subid, countid in dataset.train: if k==pid: newid=final+subid sub.append(newid) #print(pid,subid,newid) newids.append(newid) train2[img_path]= newid #print(img_path, pid, camid, subid, countid, newid) final=max(sub) #print(final) print(len(newids),final) #train=train2 #print(train2) train3=[] for img_path, pid, camid, subid, countid in dataset.train: #print(img_path,pid,train2[img_path]) path = img_path[56:90+6] #print(path) folder = path[1:4] newid=train2[img_path] #print((path, folder, pid, camid, subid, countid,newid )) train3.append((path, folder, pid, camid, subid, countid,newid )) train = train3 path = '/home/kuru/Desktop/adhi/veri-final-draft-master_noise/gmsNoise776/' pkl = {} #pkl[0] = pickle.load('/home/kuru/Desktop/veri-gms-master/gms/620.pkl') entries = os.listdir(path) for name in entries: f = open((path+name), 'rb') ccc=(path+name) #print(ccc) if name=='featureMatrix.pkl': s = name[0:13] else: s = name[0:3] #print(s) #with open (ccc,"rb") as ff: # pkl[s] = pickle.load(ff) #print(pkl[s]) pkl[s] = pickle.load(f) f.close #print(len(pkl)) with open('cids.pkl', 'rb') as handle: b = pickle.load(handle) #print(b) with 
open('index.pkl', 'rb') as handle: c = pickle.load(handle) transform_t = train_transforms(**transform_kwargs) data_tfr = vdspan(pkl_file='index_veryspan_noise.pkl', dataset = train, root_dir='/home/kuru/Desktop/veri-gms-master_noise/VeRispan/image_train/', transform=transform_t) print("lllllllllllllllllllllllllllllllllllllllllllline 433") df2=[] data_tfr_old=data_tfr for (img,label,index,pid, cid,subid,countid,newid) in data_tfr : #print((img,label,index,pid, cid,subid,countid,newid) ) #print("datframe",(label)) #print(countid) if countid > 4 : #print(countid) df2.append((img,label,index,pid, cid,subid,countid,newid)) print("filtered final trainset length",len(df2)) data_tfr=df2 trainloader = DataLoader(data_tfr, sampler=None,batch_size=train_batch_size, shuffle=True, num_workers=workers,pin_memory=True, drop_last=True) #data_tfr = vd(pkl_file='index.pkl', dataset = train, root_dir=train_dir,transform=transforms.Compose([Rescale(64),RandomCrop(32),ToTensor()])) #dataloader = DataLoader(data_tfr, batch_size=batch_size, shuffle=False, num_workers=0) for batch_idx, (img,label,index,pid, cid,subid,countid,newid) in enumerate(trainloader): #print("trainloader",batch_idx, (label,index,pid, cid,subid,countid,newid)) print("trainloader",batch_idx, (label)) break print('Initializing test data manager') dm = ImageDataManager(use_gpu, **dataset_kwargs) testloader_dict = dm.return_dataloaders() print('Initializing model: {}'.format(arch)) model = models.init_model(name=arch, num_classes=num_train_pids, loss={'xent', 'htri'}, pretrained=not no_pretrained, last_stride =2 ) print('Model size: {:.3f} M'.format(count_num_param(model))) if load_weights is not None: print("weights loaded") load_pretrained_weights(model, load_weights) print(torch.cuda.device_count()) model = nn.DataParallel(model).cuda() if use_gpu else model optimizer = init_optimizer(model, **optimizer_kwargs) #optimizer = init_optimizer(model) scheduler = init_lr_scheduler(optimizer, **lr_scheduler_kwargs) 
criterion_xent = CrossEntropyLoss(num_classes=num_train_pids, use_gpu=use_gpu, label_smooth=True) criterion_htri = TripletLoss(margin=margin) ranking_loss = nn.MarginRankingLoss(margin = margin) if evaluate: print('Evaluate only') for name in target: print('Evaluating {} ...'.format(name)) queryloader = testloader_dict[name]['query'] galleryloader = testloader_dict[name]['gallery'] _, distmat = test(model, queryloader, galleryloader, train_batch_size, use_gpu, return_distmat=True) if vis_rank: visualize_ranked_results( distmat, dm.return_testdataset_by_name(name), save_dir=osp.join(save_dir, 'ranked_results', name), topk=20 ) return time_start = time.time() ranklogger = RankLogger(source, target) print('=> Start training') data_index = search(pkl) print(len(data_index)) for epoch in range(start, max_epoch): losses = AverageMeter() #xent_losses = AverageMeter() htri_losses = AverageMeter() accs = AverageMeter() batch_time = AverageMeter() xent_losses=AverageMeter() model.train() for p in model.parameters(): p.requires_grad = True # open all layers end = time.time() for batch_idx, (img,label,index,pid, cid,subid,countid,newid) in enumerate(trainloader): trainX, trainY = torch.zeros((train_batch_size*3,3,height, width), dtype=torch.float32), torch.zeros((train_batch_size*3), dtype = torch.int64) #pids = torch.zeros((batch_size*3), dtype = torch.int16) for i in range(train_batch_size): #print("dfdsfs") labelx = label[i] indexx = index[i] cidx = pid[i] if indexx >len(pkl[labelx])-1: indexx = len(pkl[labelx])-1 #maxx = np.argmax(pkl[labelx][indexx]) a = pkl[labelx][indexx] minpos = np.argmin(ma.masked_where(a==0, a)) #print(minpos) #print(np.array(data_index).shape) #print(data_index[cidx][1]) pos_dic = data_tfr_old[data_index[cidx][1]+minpos] neg_label = int(labelx) while True: neg_label = random.choice(range(1, 770)) #print(neg_label) if neg_label is not int(labelx) and os.path.isdir(os.path.join('/home/kuru/Desktop/adiusb/veri-split/train', strint(neg_label))) is 
True: break negative_label = strint(neg_label) neg_cid = pidx[negative_label] neg_index = random.choice(range(0, len(pkl[negative_label]))) neg_dic = data_tfr_old[data_index[neg_cid][1]+neg_index] trainX[i] = img[i] trainX[i+train_batch_size] = pos_dic[0] trainX[i+(train_batch_size*2)] = neg_dic[0] trainY[i] = cidx trainY[i+train_batch_size] = pos_dic[3] trainY[i+(train_batch_size*2)] = neg_dic[3] trainX = trainX.cuda() trainY = trainY.cuda() outputs, features = model(trainX) xent_loss = criterion_xent(outputs[0:train_batch_size], trainY[0:train_batch_size]) htri_loss = criterion_htri(features, trainY) #tri_loss = ranking_loss(features) #ent_loss = xent_loss(outputs[0:batch_size], trainY[0:batch_size], num_train_pids) loss = htri_loss+xent_loss optimizer.zero_grad() loss.backward() optimizer.step() batch_time.update(time.time() - end) losses.update(loss.item(), trainY.size(0)) htri_losses.update(htri_loss.item(), trainY.size(0)) xent_losses.update(xent_loss.item(), trainY.size(0)) accs.update(accuracy(outputs[0:train_batch_size], trainY[0:train_batch_size])[0]) if (batch_idx) % 50 == 0: print('Train ', end=" ") print('Epoch: [{0}][{1}/{2}]\t' 'Time {batch_time.val:.3f} ({batch_time.avg:.3f})\t' 'TriLoss {loss.val:.4f} ({loss.avg:.4f})\t' 'XLoss {xloss.val:.4f} ({xloss.avg:.4f})\t' 'OveralLoss {oloss.val:.4f} ({oloss.avg:.4f})\t' 'Acc {acc.val:.2f} ({acc.avg:.2f})\t' 'lr {lrrr} \t'.format( epoch + 1, batch_idx + 1, len(trainloader), batch_time=batch_time, loss = htri_losses, xloss = xent_losses, oloss = losses, acc=accs , lrrr=lrrr, )) end = time.time() scheduler.step() print('=> Test') for name in target: print('Evaluating {} ...'.format(name)) queryloader = testloader_dict[name]['query'] galleryloader = testloader_dict[name]['gallery'] rank1, distmat = test(model, queryloader, galleryloader, test_batch_size, use_gpu) ranklogger.write(name, epoch + 1, rank1) rank2, distmat2 = test_rerank(model, queryloader, galleryloader, test_batch_size, use_gpu) 
ranklogger.write(name, epoch + 1, rank2) #if (epoch + 1) == max_epoch: if (epoch + 1) % 2 == 0: print('=> Test') for name in target: print('Evaluating {} ...'.format(name)) queryloader = testloader_dict[name]['query'] galleryloader = testloader_dict[name]['gallery'] rank1, distmat = test(model, queryloader, galleryloader, test_batch_size, use_gpu) ranklogger.write(name, epoch + 1, rank1) # if vis_rank: # visualize_ranked_results( # distmat, dm.return_testdataset_by_name(name), # save_dir=osp.join(save_dir, 'ranked_results', name), # topk=20) save_checkpoint({ 'state_dict': model.state_dict(), 'rank1': rank1, 'epoch': epoch + 1, 'arch': arch, 'optimizer': optimizer.state_dict(), }, save_dir)
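The training loop above packs each batch as anchors, GMS-matched positives, and randomly drawn negatives stacked into one tensor of size 3 × train_batch_size, with `trainY` laid out the same way. A toy sketch of that layout (the helper name and placeholder data are illustrative, not part of the original code):

```python
def make_triplet_batch(anchors, positives, negatives):
    """Mirror trainX's layout in the loop above: indices [0, B) hold anchors,
    [B, 2B) their positives, [2B, 3B) their negatives."""
    assert len(anchors) == len(positives) == len(negatives)
    return anchors + positives + negatives

B = 4
anchors = ['a%d' % i for i in range(B)]
positives = ['p%d' % i for i in range(B)]
negatives = ['n%d' % i for i in range(B)]
batch = make_triplet_batch(anchors, positives, negatives)
print(len(batch), batch[0], batch[B], batch[2 * B])  # 12 a0 p0 n0
```

This layout is why the cross-entropy term is computed only on `outputs[0:train_batch_size]` (the anchors), while the triplet loss sees all 3 × B features.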
def main(): torch.backends.cudnn.deterministic = True cudnn.benchmark = True #parser = argparse.ArgumentParser(description="ReID Baseline Training") #parser.add_argument( #"--config_file", default="", help="path to config file", type=str) #parser.add_argument("opts", help="Modify config options using the command-line", default=None, nargs=argparse.REMAINDER) #args = parser.parse_args() config_file = 'configs/baseline_veri_r101_a.yml' if config_file != "": cfg.merge_from_file(config_file) #cfg.merge_from_list(args.opts) cfg.freeze() output_dir = cfg.OUTPUT_DIR if output_dir and not os.path.exists(output_dir): os.makedirs(output_dir) logger = setup_logger("reid_baseline", output_dir, if_train=True) logger.info("Saving model in the path :{}".format(cfg.OUTPUT_DIR)) logger.info(config_file) if config_file != "": logger.info("Loaded configuration file {}".format(config_file)) with open(config_file, 'r') as cf: config_str = "\n" + cf.read() logger.info(config_str) logger.info("Running with config:\n{}".format(cfg)) os.environ['CUDA_VISIBLE_DEVICES'] = cfg.MODEL.DEVICE_ID path = 'D:/Python_SMU/Veri/verigms/gms/' pkl = {} entries = os.listdir(path) for name in entries: f = open((path + name), 'rb') if name == 'featureMatrix.pkl': s = name[0:13] else: s = name[0:3] pkl[s] = pickle.load(f) f.close with open('cids.pkl', 'rb') as handle: b = pickle.load(handle) with open('index.pkl', 'rb') as handle: c = pickle.load(handle) train_transforms, val_transforms, dataset, train_set, val_set = make_dataset( cfg, pkl_file='index.pkl') num_workers = cfg.DATALOADER.NUM_WORKERS num_classes = dataset.num_train_pids #pkl_f = 'index.pkl' pid = 0 pidx = {} for img_path, pid, _, _ in dataset.train: path = img_path.split('\\')[-1] folder = path[1:4] pidx[folder] = pid pid += 1 if 'triplet' in cfg.DATALOADER.SAMPLER: train_loader = DataLoader(train_set, batch_size=cfg.SOLVER.IMS_PER_BATCH, sampler=RandomIdentitySampler( dataset.train, cfg.SOLVER.IMS_PER_BATCH, cfg.DATALOADER.NUM_INSTANCE), 
num_workers=num_workers, pin_memory=True, collate_fn=train_collate_fn) elif cfg.DATALOADER.SAMPLER == 'softmax': print('using softmax sampler') train_loader = DataLoader(train_set, batch_size=cfg.SOLVER.IMS_PER_BATCH, shuffle=True, num_workers=num_workers, pin_memory=True, collate_fn=train_collate_fn) else: print('unsupported sampler! expected softmax or triplet but got {}'. format(cfg.SAMPLER)) print("train loader loaded successfully") val_loader = DataLoader(val_set, batch_size=cfg.TEST.IMS_PER_BATCH, shuffle=False, num_workers=num_workers, pin_memory=True, collate_fn=train_collate_fn) print("val loader loaded successfully") if cfg.MODEL.PRETRAIN_CHOICE == 'finetune': model = make_model(cfg, num_class=576) model.load_param_finetune(cfg.MODEL.PRETRAIN_PATH) print('Loading pretrained model for finetuning......') else: model = make_model(cfg, num_class=num_classes) loss_func, center_criterion = make_loss(cfg, num_classes=num_classes) optimizer, optimizer_center = make_optimizer(cfg, model, center_criterion) scheduler = WarmupMultiStepLR(optimizer, cfg.SOLVER.STEPS, cfg.SOLVER.GAMMA, cfg.SOLVER.WARMUP_FACTOR, cfg.SOLVER.WARMUP_EPOCHS, cfg.SOLVER.WARMUP_METHOD) print("model,optimizer, loss, scheduler loaded successfully") height, width = cfg.INPUT.SIZE_TRAIN log_period = cfg.SOLVER.LOG_PERIOD checkpoint_period = cfg.SOLVER.CHECKPOINT_PERIOD eval_period = cfg.SOLVER.EVAL_PERIOD device = "cuda" epochs = cfg.SOLVER.MAX_EPOCHS logger = logging.getLogger("reid_baseline.train") logger.info('start training') if device: if torch.cuda.device_count() > 1: print('Using {} GPUs for training'.format( torch.cuda.device_count())) model = nn.DataParallel(model) model.to(device) loss_meter = AverageMeter() acc_meter = AverageMeter() evaluator = R1_mAP_eval(len(dataset.query), max_rank=50, feat_norm=cfg.TEST.FEAT_NORM) model.base._freeze_stages() logger.info('Freezing the stages number:{}'.format(cfg.MODEL.FROZEN)) data_index = search(pkl) print("Ready for training") for epoch in 
range(1, epochs + 1): start_time = time.time() loss_meter.reset() acc_meter.reset() evaluator.reset() scheduler.step() model.train() for n_iter, (img, label, index, pid, cid) in enumerate(train_loader): optimizer.zero_grad() optimizer_center.zero_grad() #img = img.to(device) #target = vid.to(device) trainX, trainY = torch.zeros( (train_loader.batch_size * 3, 3, height, width), dtype=torch.float32), torch.zeros( (train_loader.batch_size * 3), dtype=torch.int64) for i in range(train_loader.batch_size): labelx = label[i] indexx = index[i] cidx = pid[i] if indexx > len(pkl[labelx]) - 1: indexx = len(pkl[labelx]) - 1 a = pkl[labelx][indexx] minpos = np.argmin(ma.masked_where(a == 0, a)) pos_dic = train_set[data_index[cidx][1] + minpos] #print(pos_dic[1]) neg_label = int(labelx) while True: neg_label = random.choice(range(1, 770)) if neg_label is not int(labelx) and os.path.isdir( os.path.join('D:/datasets/veri-split/train', strint(neg_label))) is True: break negative_label = strint(neg_label) neg_cid = pidx[negative_label] neg_index = random.choice(range(0, len(pkl[negative_label]))) neg_dic = train_set[data_index[neg_cid][1] + neg_index] trainX[i] = img[i] trainX[i + train_loader.batch_size] = pos_dic[0] trainX[i + (train_loader.batch_size * 2)] = neg_dic[0] trainY[i] = cidx trainY[i + train_loader.batch_size] = pos_dic[3] trainY[i + (train_loader.batch_size * 2)] = neg_dic[3] #print(trainY) trainX = trainX.cuda() trainY = trainY.cuda() score, feat = model(trainX, trainY) loss = loss_func(score, feat, trainY) loss.backward() optimizer.step() if 'center' in cfg.MODEL.METRIC_LOSS_TYPE: for param in center_criterion.parameters(): param.grad.data *= (1. 
/ cfg.SOLVER.CENTER_LOSS_WEIGHT) optimizer_center.step() acc = (score.max(1)[1] == trainY).float().mean() loss_meter.update(loss.item(), img.shape[0]) acc_meter.update(acc, 1) if (n_iter + 1) % log_period == 0: logger.info( "Epoch[{}] Iteration[{}/{}] Loss: {:.3f}, Acc: {:.3f}, Base Lr: {:.2e}" .format(epoch, (n_iter + 1), len(train_loader), loss_meter.avg, acc_meter.avg, scheduler.get_lr()[0])) end_time = time.time() time_per_batch = (end_time - start_time) / (n_iter + 1) logger.info( "Epoch {} done. Time per batch: {:.3f}[s] Speed: {:.1f}[samples/s]" .format(epoch, time_per_batch, train_loader.batch_size / time_per_batch)) if epoch % checkpoint_period == 0: torch.save( model.state_dict(), os.path.join(cfg.OUTPUT_DIR, cfg.MODEL.NAME + '_{}.pth'.format(epoch))) if epoch % eval_period == 0: model.eval() for n_iter, (img, vid, camid, _, _) in enumerate(val_loader): with torch.no_grad(): img = img.to(device) feat = model(img) evaluator.update((feat, vid, camid)) cmc, mAP, _, _, _, _, _ = evaluator.compute() logger.info("Validation Results - Epoch: {}".format(epoch)) logger.info("mAP: {:.1%}".format(mAP)) for r in [1, 5, 10]: logger.info("CMC curve, Rank-{:<3}:{:.1%}".format( r, cmc[r - 1]))
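Both training loops rely on `AverageMeter` (`loss_meter.update(...)`, `.avg`, `.val`) to track running means. Its definition is not shown here; the conventional implementation, as popularized by the official PyTorch ImageNet example, is:

```python
class AverageMeter:
    """Tracks the latest value and a running sum, count, and mean."""

    def __init__(self):
        self.reset()

    def reset(self):
        self.val = 0.0
        self.sum = 0.0
        self.count = 0
        self.avg = 0.0

    def update(self, val, n=1):
        # `n` weights the update, e.g. the batch size for a per-sample mean.
        self.val = val
        self.sum += val * n
        self.count += n
        self.avg = self.sum / self.count

m = AverageMeter()
m.update(2.0, 2)
m.update(4.0, 2)
print(m.avg)  # 3.0
```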
def main(): #GENERAL torch.cuda.empty_cache() root = "/home/kuru/Desktop/veri-gms-master/" train_dir = '/home/kuru/Desktop/veri-gms-master/VeRispan/image_train/' source = {'verispan'} target = {'verispan'} workers = 4 height = 320 width = 320 train_sampler = 'RandomSampler' #AUGMENTATION random_erase = True jitter = True aug = True #OPTIMIZATION opt = 'adam' lr = 0.001 weight_decay = 5e-4 momentum = 0.9 sgd_damp = 0.0 nesterov = True warmup_factor = 0.01 warmup_method = 'linear' STEPS = (30, 60) GAMMA = 0.1 WARMUP_FACTOR = 0.01 WARMUP_EPOCHS = 10 WARMUP_METHOD = 'linear' #HYPERPARAMETER max_epoch = 80 start = 0 train_batch_size = 16 test_batch_size = 50 #SCHEDULER lr_scheduler = 'multi_step' stepsize = [30, 60] gamma = 0.1 #LOSS margin = 0.3 num_instances = 4 lambda_tri = 1 #MODEL #arch = 'resnet101' arch = 'resnet50_ibn_a' no_pretrained = False #TEST SETTINGS #load_weights = '/home/kuru/Desktop/veri-gms-master/IBN-Net_pytorch0.4.1/resnet101_ibn_a.pth' #load_weights = '/home/kuru/Desktop/veri-gms-master/IBN-Net_pytorch0.4.1/resnet101_ibn_a.pth' load_weights = '/home/kuru/Desktop/veri-gms-master/IBN-Net_pytorch0.4.1/resnet50_ibn_a.pth' #load_weights = None start_eval = 0 eval_freq = -1 num_classes = 776 feat_dim = 2048 CENTER_LR = 0.5 CENTER_LOSS_WEIGHT = 0.0005 center_criterion = CenterLoss(num_classes=num_classes, feat_dim=feat_dim, use_gpu=True) optimizer_center = torch.optim.SGD(center_criterion.parameters(), lr=CENTER_LR) #MISC use_gpu = True #use_gpu = False print_freq = 10 seed = 1 resume = '' save_dir = '/home/kuru/Desktop/veri-gms-master_noise/spanningtree_veri_pure/' gpu_id = 0, 1 vis_rank = True query_remove = True evaluate = False dataset_kwargs = { 'source_names': source, 'target_names': target, 'root': root, 'height': height, 'width': width, 'train_batch_size': train_batch_size, 'test_batch_size': test_batch_size, 'train_sampler': train_sampler, 'random_erase': random_erase, 'color_jitter': jitter, 'color_aug': aug } transform_kwargs = { 'height': 
height, 'width': width, 'random_erase': random_erase, 'color_jitter': jitter, 'color_aug': aug } optimizer_kwargs = { 'optim': opt, 'lr': lr, 'weight_decay': weight_decay, 'momentum': momentum, 'sgd_dampening': sgd_damp, 'sgd_nesterov': nesterov } lr_scheduler_kwargs = { 'lr_scheduler': lr_scheduler, 'stepsize': stepsize, 'gamma': gamma } use_gpu = torch.cuda.is_available() log_name = 'log_test.txt' if evaluate else 'log_train.txt' sys.stdout = Logger(osp.join(save_dir, log_name)) print('Currently using GPU ', gpu_id) cudnn.benchmark = True print('Initializing image data manager') #dataset = init_imgreid_dataset(root='/home/kuru/Desktop/veri-gms-master/', name='veri') dataset = init_imgreid_dataset(root='/home/kuru/Desktop/veri-gms-master/', name='verispan') train = [] num_train_pids = 0 num_train_cams = 0 print(len(dataset.train)) for img_path, pid, camid, subid, countid in dataset.train: #print(img_path) path = img_path[56:90 + 6] #print(path) folder = path[1:4] #print(folder) #print(img_path, pid, camid,subid,countid) pid += num_train_pids camid += num_train_cams newidd = 0 train.append((path, folder, pid, camid, subid, countid)) #print(train) #break num_train_pids += dataset.num_train_pids num_train_cams += dataset.num_train_cams pid = 0 pidx = {} for img_path, pid, camid, subid, countid in dataset.train: path = img_path[56:90 + 6] folder = path[1:4] pidx[folder] = pid pid += 1 #print(pidx) sub = [] final = 0 xx = dataset.train newids = [] print(train[0:2]) train2 = {} for k in range(0, 770): for img_path, pid, camid, subid, countid in dataset.train: if k == pid: newid = final + subid sub.append(newid) #print(pid,subid,newid) newids.append(newid) train2[img_path] = newid #print(img_path, pid, camid, subid, countid, newid) final = max(sub) #print(final) print(len(newids), final) #train=train2 #print(train2) train3 = [] for img_path, pid, camid, subid, countid in dataset.train: #print(img_path,pid,train2[img_path]) path = img_path[56:90 + 6] #print(path) folder = 
path[1:4] newid = train2[img_path] #print((path, folder, pid, camid, subid, countid,newid )) train3.append((path, folder, pid, camid, subid, countid, newid)) train = train3 # for (path, folder, pid, camid, subid, countid,newid) in train: # print(path, folder) #path = '/home/kuru/Desktop/adhi/veri-final-draft-master_noise/gmsNoise776/' path = '/home/kuru/Desktop/veri-gms-master/gms/' pkl = {} #pkl[0] = pickle.load('/home/kuru/Desktop/veri-gms-master/gms/620.pkl') entries = os.listdir(path) for name in entries: f = open((path + name), 'rb') ccc = (path + name) #print(ccc) if name == 'featureMatrix.pkl': s = name[0:13] else: s = name[0:3] #print(s) #with open (ccc,"rb") as ff: # pkl[s] = pickle.load(ff) #print(pkl[s]) pkl[s] = pickle.load(f) f.close #print(len(pkl)) print('=> pickle indexing') data_index = search(pkl) print(len(data_index)) transform_t = train_transforms(**transform_kwargs) #print(train[0],train[10]) #data_tfr = vd(pkl_file='index.pkl', dataset = train, root_dir='/home/kuru/Desktop/veri-gms-master/VeRi/image_train/', transform=transform_t) data_tfr = vdspan( pkl_file='index_veryspan.pkl', dataset=train, root_dir='/home/kuru/Desktop/veri-gms-master/VeRispan/image_train/', transform=transform_t) #print(data_tfr) #print(trainloader) #data_tfr2=list(data_tfr) print("lllllllllllllllllllllllllllllllllllllllllllline 433") df2 = [] data_tfr_old = data_tfr for (img, label, index, pid, cid, subid, countid, newid) in data_tfr: #print((img,label,index,pid, cid,subid,countid,newid) ) #print("datframe",(label)) #print(countid) if countid > 4: #print(countid) df2.append((img, label, index, pid, cid, subid, countid, newid)) print("filtered final trainset length", len(df2)) data_tfr = df2 # with open('df2noise_ex.pkl', 'wb') as handle: # b = pickle.dump(df2, handle, protocol=pickle.HIGHEST_PROTOCOL) # with open('df2noise.pkl', 'rb') as handle: # df2 = pickle.load(handle) # data_tfr=df2 # for (img,label,index,pid, cid,subid,countid,newid) in data_tfr : # 
print("datframe",(label)) #data_tfr = vdspansort( dataset = train, root_dir='/home/kuru/Desktop/veri-gms-master_noise/VeRispan/image_train/', transform=transform_t) #trainloader = DataLoader(df2, sampler=None,batch_size=train_batch_size, shuffle=True, num_workers=workers,pin_memory=True, drop_last=True) trainloader = DataLoader(data_tfr, sampler=None, batch_size=train_batch_size, shuffle=True, num_workers=workers, pin_memory=True, drop_last=True) for batch_idx, (img, label, index, pid, cid, subid, countid, newid) in enumerate(trainloader): #print("trainloader",batch_idx, (label,index,pid, cid,subid,countid,newid)) print("trainloader", batch_idx, (label)) break print('Initializing test data manager') dm = ImageDataManager(use_gpu, **dataset_kwargs) testloader_dict = dm.return_dataloaders() print('Initializing model: {}'.format(arch)) model = models.init_model(name=arch, num_classes=num_train_pids, loss={'xent', 'htri'}, pretrained=not no_pretrained, last_stride=2) print('Model size: {:.3f} M'.format(count_num_param(model))) if load_weights is not None: print("weights loaded") load_pretrained_weights(model, load_weights) #checkpoint = torch.load('/home/kuru/Desktop/veri-gms-master/logg/model.pth.tar-19') #model._load_from_state_dict(checkpoint['state_dict']) #model.load_state_dict(checkpoint['state_dict']) #optimizer.load_state_dict(checkpoint['optimizer']) #print(checkpoint['epoch']) #print(checkpoint['rank1']) os.environ['CUDA_VISIBLE_DEVICES'] = '0' print(torch.cuda.device_count()) model = nn.DataParallel(model).cuda() if use_gpu else model optimizer = init_optimizer(model, **optimizer_kwargs) #optimizer = init_optimizer(model) #optimizer.load_state_dict(checkpoint['optimizer']) scheduler = init_lr_scheduler(optimizer, **lr_scheduler_kwargs) # scheduler = WarmupMultiStepLR(optimizer, STEPS, GAMMA, # WARMUP_FACTOR, # WARMUP_EPOCHS, WARMUP_METHOD) criterion_xent = CrossEntropyLoss(num_classes=num_train_pids, use_gpu=use_gpu, label_smooth=True) criterion_htri = 
TripletLoss(margin=margin) ranking_loss = nn.MarginRankingLoss(margin=margin) if evaluate: print('Evaluate only') for name in target: print('Evaluating {} ...'.format(name)) queryloader = testloader_dict[name]['query'] galleryloader = testloader_dict[name]['gallery'] _, distmat = test(model, queryloader, galleryloader, train_batch_size, use_gpu, return_distmat=True) if vis_rank: visualize_ranked_results(distmat, dm.return_testdataset_by_name(name), save_dir=osp.join( save_dir, 'ranked_results', name), topk=20) return time_start = time.time() ranklogger = RankLogger(source, target) # # checkpoint = torch.load('/home/kuru/Desktop/market_all/ibna_model/model.pth.tar-79') # # model.load_state_dict(checkpoint['state_dict']) # # optimizer.load_state_dict(checkpoint['optimizer']) # # print(checkpoint['epoch']) # # start_epoch=checkpoint['epoch'] # # start=start_epoch # checkpoint = torch.load('/home/kuru/Desktop/veri-gms-master/spanningtreeveri/model.pth.tar-2') # model.load_state_dict(checkpoint['state_dict']) # optimizer.load_state_dict(checkpoint['optimizer']) # print(checkpoint['epoch']) # start_epoch=checkpoint['epoch'] # start=start_epoch ##start_epoch=resume_from_checkpoint('/home/kuru/Desktop/veri-gms-master/logg/model.pth.tar-20', model, optimizer=None) print('=> Start training') for epoch in range(start, max_epoch): print(epoch, scheduler.get_lr()[0]) #print( torch.cuda.memory_allocated(0)) losses = AverageMeter() #xent_losses = AverageMeter() htri_losses = AverageMeter() accs = AverageMeter() batch_time = AverageMeter() xent_losses = AverageMeter() model.train() for p in model.parameters(): p.requires_grad = True # open all layers end = time.time() for batch_idx, (img, label, index, pid, cid, subid, countid, newid) in enumerate(trainloader): trainX, trainY = torch.zeros( (train_batch_size * 3, 3, height, width), dtype=torch.float32), torch.zeros((train_batch_size * 3), dtype=torch.int64) #pids = torch.zeros((batch_size*3), dtype = torch.int16) #batchcount=0 for 
i in range(train_batch_size):
                if countid[i] > 4:
                    labelx = label[i]
                    indexx = index[i]
                    cidx = pid[i]
                    if indexx > len(pkl[labelx]) - 1:
                        indexx = len(pkl[labelx]) - 1

                    # Positive: mask out zero entries of the GMS match row,
                    # then take the position of the smallest remaining count.
                    a = pkl[labelx][indexx]
                    minpos = np.argmin(ma.masked_where(a == 0, a))
                    pos_dic = data_tfr_old[data_index[cidx][1] + minpos]

                    # Negative: a random different identity whose folder
                    # exists on disk. Use != for value comparison, not
                    # `is not`, which tests object identity.
                    neg_label = int(labelx)
                    while True:
                        neg_label = random.choice(range(1, 770))
                        if neg_label != int(labelx) and os.path.isdir(
                                os.path.join(
                                    '/home/kuru/Desktop/veri-gms-master_noise/veriNoise_train_spanning_folder',
                                    strint(neg_label))):
                            break
                    negative_label = strint(neg_label)
                    neg_cid = pidx[negative_label]
                    neg_index = random.choice(range(0, len(pkl[negative_label])))
                    neg_dic = data_tfr_old[data_index[neg_cid][1] + neg_index]

                    trainX[i] = img[i]
                    trainX[i + train_batch_size] = pos_dic[0]
                    trainX[i + (train_batch_size * 2)] = neg_dic[0]
                    trainY[i] = cidx
                    trainY[i + train_batch_size] = pos_dic[3]
                    trainY[i + (train_batch_size * 2)] = neg_dic[3]

            trainX = trainX.cuda()
            trainY = trainY.cuda()
            outputs, features = model(trainX)
            xent_loss = criterion_xent(outputs[0:train_batch_size],
                                       trainY[0:train_batch_size])
            htri_loss = criterion_htri(features, trainY)
            centerloss = CENTER_LOSS_WEIGHT * center_criterion(features, trainY)
            # centerloss is computed but, as in the original (where a second
            # assignment overwrote the first), it is not added to the total
            # loss; the center-optimizer update below is also disabled.
            loss = htri_loss + xent_loss
            optimizer.zero_grad()
optimizer_center.zero_grad()
            loss.backward()
            optimizer.step()
            # for param in center_criterion.parameters():
            #     param.grad.data *= (1. / CENTER_LOSS_WEIGHT)
            # optimizer_center.step()

            for param_group in optimizer.param_groups:
                lrrr = str(param_group['lr'])

            batch_time.update(time.time() - end)
            losses.update(loss.item(), trainY.size(0))
            htri_losses.update(htri_loss.item(), trainY.size(0))
            xent_losses.update(xent_loss.item(), trainY.size(0))
            accs.update(accuracy(outputs[0:train_batch_size],
                                 trainY[0:train_batch_size])[0])

            if batch_idx % 50 == 0:
                print('Train ', end=" ")
                print('Epoch: [{0}][{1}/{2}]\t'
                      'Time {batch_time.val:.3f} ({batch_time.avg:.3f})\t'
                      'TriLoss {loss.val:.4f} ({loss.avg:.4f})\t'
                      'XLoss {xloss.val:.4f} ({xloss.avg:.4f})\t'
                      'OverallLoss {oloss.val:.4f} ({oloss.avg:.4f})\t'
                      'Acc {acc.val:.2f} ({acc.avg:.2f})\t'
                      'lr {lrrr} \t'.format(
                          epoch + 1,
                          batch_idx + 1,
                          len(trainloader),
                          batch_time=batch_time,
                          loss=htri_losses,
                          xloss=xent_losses,
                          oloss=losses,
                          acc=accs,
                          lrrr=lrrr,
                      ))
            end = time.time()

        scheduler.step()
        print('=> Test')

        save_checkpoint(
            {
                'state_dict': model.state_dict(),
                #'rank1': rank1,
                'epoch': epoch + 1,
                'arch': arch,
                'optimizer': optimizer.state_dict(),
            }, save_dir)

        GPUtil.showUtilization()
        print(torch.cuda.memory_allocated(), torch.cuda.memory_cached())

        for name in target:
            print('Evaluating {} ...'.format(name))
            queryloader = testloader_dict[name]['query']
            galleryloader = testloader_dict[name]['gallery']
            rank1, distmat = test(model, queryloader, galleryloader,
                                  test_batch_size, use_gpu)
            ranklogger.write(name, epoch + 1, rank1)
            rank2, distmat2 = test_rerank(model, queryloader, galleryloader,
                                          test_batch_size, use_gpu)
            ranklogger.write(name, epoch + 1, rank2)
            del queryloader
            del galleryloader
            del distmat
            print(torch.cuda.memory_allocated(), torch.cuda.memory_cached())
torch.cuda.empty_cache() if (epoch + 1) == max_epoch: #if (epoch + 1) % 10 == 0: print('=> Test') save_checkpoint( { 'state_dict': model.state_dict(), 'rank1': rank1, 'epoch': epoch + 1, 'arch': arch, 'optimizer': optimizer.state_dict(), }, save_dir) for name in target: print('Evaluating {} ...'.format(name)) queryloader = testloader_dict[name]['query'] galleryloader = testloader_dict[name]['gallery'] rank1, distmat = test(model, queryloader, galleryloader, test_batch_size, use_gpu) ranklogger.write(name, epoch + 1, rank1) # del queryloader # del galleryloader # del distmat if vis_rank: visualize_ranked_results( distmat, dm.return_testdataset_by_name(name), save_dir=osp.join(save_dir, 'ranked_results', name), topk=20)
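Both training scripts above track running loss, accuracy, and batch-time statistics through `AverageMeter` objects that are imported from the re-id utilities. A minimal equivalent of such a meter (an assumed reconstruction for illustration, not the project's exact class) looks like this:

```python
class AverageMeter:
    """Tracks the most recent value and a running (weighted) average."""

    def __init__(self):
        self.val = 0.0    # last value passed to update()
        self.sum = 0.0    # weighted sum of all values
        self.count = 0    # total weight seen so far
        self.avg = 0.0    # sum / count

    def update(self, val, n=1):
        # n is the weight, e.g. the batch size for a per-batch loss.
        self.val = val
        self.sum += val * n
        self.count += n
        self.avg = self.sum / self.count
```

Calling `losses.update(loss.item(), trainY.size(0))` as the loop does therefore weights each batch's loss by its number of samples, so `losses.avg` is a per-sample average over the epoch.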
else:
                raise err.FieldNotProvidedError
        except err.FieldNotProvidedError:
            txt.print_red(
                "FieldNotProvidedError: Cannot execute: not enough arguments!"
            )

    # If the command is 'search', try to get a substring and a string.
    # If the substring is inside the string, print a formatted version of
    # the search_success_string, notifying the user that it was successful.
    # If it's not, notify the user that nothing was found. If some of the
    # fields aren't provided, print an error message.
    elif commands[0].lower() in ("search", "gss"):
        try:
            if len(commands) >= 3:
                fn.search(commands, search_success_string)
            else:
                raise err.FieldNotProvidedError
        except err.FieldNotProvidedError:
            txt.print_red(
                "FieldNotProvidedError: Cannot execute: not enough arguments!"
            )
        except err.InstanceNotFoundError:
            txt.print_red("InstanceNotFoundError: No instances were found!")

    # If the command is 'say-options', print each option in the
    # say_options list.
    elif commands[0].lower() in ("say-options", "sopt"):
        fn.say_options()

    # if the command is 'time' display current time using
def main(): #GENERAL root = "/home/kuru/Desktop/veri-gms-master/" train_dir = '/home/kuru/Desktop/veri-gms-master/VeRispan/image_train/' source = {'veri'} target = {'veri'} workers = 2 height = 320 width = 320 train_sampler = 'RandomSampler' #AUGMENTATION random_erase = True jitter = True aug = True #OPTIMIZATION opt = 'adam' lr = 0.0003 weight_decay = 5e-4 momentum = 0.9 sgd_damp = 0.0 nesterov = True warmup_factor = 0.01 warmup_method = 'linear' #HYPERPARAMETER max_epoch = 80 start = 0 train_batch_size = 16 test_batch_size = 50 #SCHEDULER lr_scheduler = 'multi_step' stepsize = [30, 60] gamma = 0.1 #LOSS margin = 0.3 num_instances = 6 lambda_tri = 1 #MODEL arch = 'resnet101_ibn_a' no_pretrained = False #TEST SETTINGS load_weights = '/home/kuru/Desktop/veri-gms-master/IBN-Net_pytorch0.4.1/resnet101_ibn_a.pth' #load_weights = None start_eval = 0 eval_freq = -1 #MISC use_gpu = True use_amp = True print_freq = 50 seed = 1 resume = '' save_dir = '/home/kuru/Desktop/veri-gms-master/logapex/' gpu_id = 0 vis_rank = True query_remove = True evaluate = False dataset_kwargs = { 'source_names': source, 'target_names': target, 'root': root, 'height': height, 'width': width, 'train_batch_size': train_batch_size, 'test_batch_size': test_batch_size, 'train_sampler': train_sampler, 'random_erase': random_erase, 'color_jitter': jitter, 'color_aug': aug } transform_kwargs = { 'height': height, 'width': width, 'random_erase': random_erase, 'color_jitter': jitter, 'color_aug': aug } optimizer_kwargs = { 'optim': opt, 'lr': lr, 'weight_decay': weight_decay, 'momentum': momentum, 'sgd_dampening': sgd_damp, 'sgd_nesterov': nesterov } lr_scheduler_kwargs = { 'lr_scheduler': lr_scheduler, 'stepsize': stepsize, 'gamma': gamma } use_gpu = torch.cuda.is_available() log_name = 'log_test.txt' if evaluate else 'log_train.txt' sys.stdout = Logger(osp.join(save_dir, log_name)) print('Currently using GPU ', gpu_id) cudnn.benchmark = True print('Initializing image data manager') dataset = 
init_imgreid_dataset(root='/home/kuru/Desktop/veri-gms-master/', name='veri')
    train = []
    num_train_pids = 0
    num_train_cams = 0
    for img_path, pid, camid in dataset.train:
        path = img_path[52:77]
        folder = path[1:4]
        pid += num_train_pids
        camid += num_train_cams
        train.append((path, folder, pid, camid))
    num_train_pids += dataset.num_train_pids
    num_train_cams += dataset.num_train_cams

    # Map each identity folder name to its pid.
    pid = 0
    pidx = {}
    for img_path, pid, camid in dataset.train:
        path = img_path[52:77]
        folder = path[1:4]
        pidx[folder] = pid
        pid += 1

    # Load the per-vehicle GMS match matrices. Use a context manager so the
    # file is actually closed (the original's `f.close` without parentheses
    # was a no-op).
    path = '/home/kuru/Desktop/veri-gms-master/gms/'
    pkl = {}
    entries = os.listdir(path)
    for name in entries:
        if name == 'featureMatrix.pkl':
            s = name[0:13]
        else:
            s = name[0:3]
        with open(path + name, 'rb') as f:
            pkl[s] = pickle.load(f)

    transform_t = train_transforms(**transform_kwargs)
    data_tfr = vd(
        pkl_file='index.pkl',
        dataset=train,
        root_dir='/home/kuru/Desktop/veri-gms-master/VeRi/image_train/',
        transform=transform_t)
    trainloader = DataLoader(data_tfr,
                             sampler=None,
                             batch_size=train_batch_size,
                             shuffle=True,
                             num_workers=workers,
                             pin_memory=False,
                             drop_last=True)

    print('Initializing test data manager')
    dm = ImageDataManager(use_gpu, **dataset_kwargs)
    testloader_dict = dm.return_dataloaders()

    print('Initializing model: {}'.format(arch))
    model = models.init_model(name=arch,
                              num_classes=num_train_pids,
                              loss={'xent', 'htri'},
                              last_stride=1,
                              pretrained=not no_pretrained,
                              use_gpu=use_gpu)
    print('Model size: {:.3f} M'.format(count_num_param(model)))

    if load_weights is not None:
        load_pretrained_weights(model, load_weights)
        print("weights loaded")

    model = model.cuda() if use_gpu else model
    optimizer = init_optimizer(model, **optimizer_kwargs)
    # amp.initialize must wrap the bare model before DataParallel.
    model, optimizer = amp.initialize(model,
                                      optimizer,
                                      opt_level="O2",
                                      keep_batchnorm_fp32=True,
                                      loss_scale="dynamic")
    model = nn.DataParallel(model).cuda() if use_gpu else model
    scheduler = init_lr_scheduler(optimizer, **lr_scheduler_kwargs)

    criterion_xent = CrossEntropyLoss(num_classes=num_train_pids,
                                      use_gpu=use_gpu,
                                      label_smooth=True)
    criterion_htri = TripletLoss(margin=margin)
    ranking_loss = nn.MarginRankingLoss(margin=margin)

    if evaluate:
        print('Evaluate only')
        for name in target:
            print('Evaluating {} ...'.format(name))
            queryloader = testloader_dict[name]['query']
            galleryloader = testloader_dict[name]['gallery']
            _, distmat = test(model, queryloader, galleryloader,
                              train_batch_size, use_gpu, return_distmat=True)
            if vis_rank:
                visualize_ranked_results(
                    distmat,
                    dm.return_testdataset_by_name(name),
                    save_dir=osp.join(save_dir, 'ranked_results', name),
                    topk=20)
        return

    time_start = time.time()
    ranklogger = RankLogger(source, target)
    print('=> Start training')
    data_index = search(pkl)

    for epoch in range(start, max_epoch):
        losses = AverageMeter()
        htri_losses = AverageMeter()
        accs = AverageMeter()
        batch_time = AverageMeter()

        model.train()
        for p in model.parameters():
            p.requires_grad = True  # open all layers

        end = time.time()
        for batch_idx, (img, label, index, pid, cid) in enumerate(trainloader):
            trainX = torch.zeros((train_batch_size * 3, 3, height, width),
                                 dtype=torch.float32)
            trainY = torch.zeros((train_batch_size * 3), dtype=torch.int64)
            for i in range(train_batch_size):
                labelx = str(label[i])
                indexx = int(index[i])
                cidx = int(pid[i])
                if indexx > len(pkl[labelx]) - 1:
                    indexx = len(pkl[labelx]) - 1

                # Positive: mask out zero entries of the GMS match row.
                # Note: this variant takes argmax (most matches), unlike
                # the argmin used in the first script.
                a = pkl[labelx][indexx]
                minpos = np.argmax(ma.masked_where(a == 0, a))
                pos_dic = data_tfr[data_index[cidx][1] + minpos]

                # Negative: a random different identity whose folder exists
                # on disk. Use != for value comparison, not `is not`.
                neg_label = int(labelx)
                while True:
                    neg_label = random.choice(range(1, 770))
                    if neg_label != int(labelx) and os.path.isdir(
                            os.path.join(
                                '/home/kuru/Desktop/adiusb/veri-split/train',
                                strint(neg_label))):
                        break
                negative_label = strint(neg_label)
                neg_cid = pidx[negative_label]
                neg_index = random.choice(range(0, len(pkl[negative_label])))
                neg_dic = data_tfr[data_index[neg_cid][1] + neg_index]

                trainX[i] = img[i]
                trainX[i + train_batch_size] = pos_dic[0]
                trainX[i + (train_batch_size * 2)] = neg_dic[0]
                trainY[i] = cidx
                trainY[i + train_batch_size] = pos_dic[3]
                trainY[i + (train_batch_size * 2)] = neg_dic[3]

            trainX = trainX.cuda()
            trainY = trainY.cuda()
            outputs, features = model(trainX)
            xent_loss = criterion_xent(outputs[0:train_batch_size],
                                       trainY[0:train_batch_size])
            htri_loss = criterion_htri(features, trainY)
            loss = htri_loss + xent_loss

            optimizer.zero_grad()
            if use_amp:
                with amp.scale_loss(loss, optimizer) as scaled_loss:
                    scaled_loss.backward()
            else:
                loss.backward()
            optimizer.step()

            batch_time.update(time.time() - end)
            losses.update(loss.item(), trainY.size(0))
            htri_losses.update(htri_loss.item(), trainY.size(0))
            accs.update(accuracy(outputs[0:train_batch_size],
                                 trainY[0:train_batch_size])[0])

            if batch_idx % print_freq == 0:
                print('Train ', end=" ")
                print('Epoch: [{0}][{1}/{2}]\t'
                      'Time {batch_time.val:.3f} ({batch_time.avg:.3f})\t'
                      'Loss {loss.val:.4f} ({loss.avg:.4f})\t'
                      'Acc {acc.val:.2f} ({acc.avg:.2f})\t'.format(
                          epoch + 1,
                          batch_idx + 1,
                          len(trainloader),
                          batch_time=batch_time,
                          loss=htri_losses,
                          acc=accs))
            end = time.time()

        scheduler.step()

        print('=> Test')
        for name in target:
            print('Evaluating {} ...'.format(name))
            queryloader = testloader_dict[name]['query']
            galleryloader = testloader_dict[name]['gallery']
            rank1, distmat = test(model, queryloader, galleryloader,
                                  test_batch_size, use_gpu)
            ranklogger.write(name, epoch + 1, rank1)
            rank2,
distmat2 = test_rerank(model, queryloader, galleryloader, test_batch_size, use_gpu) ranklogger.write(name, epoch + 1, rank2) del queryloader del galleryloader del distmat #print(torch.cuda.memory_allocated(),torch.cuda.memory_cached()) torch.cuda.empty_cache() if (epoch + 1) == max_epoch: #if (epoch + 1) % 10 == 0: print('=> Test') save_checkpoint( { 'state_dict': model.state_dict(), 'rank1': rank1, 'epoch': epoch + 1, 'arch': arch, 'optimizer': optimizer.state_dict(), }, save_dir) if vis_rank: visualize_ranked_results(distmat, dm.return_testdataset_by_name(name), save_dir=osp.join( save_dir, 'ranked_results', name), topk=20)
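Both training loops above pick the positive sample by masking out the zero entries of a GMS match-count row and then taking an arg-extreme over what remains (`argmin` in the first script, `argmax` in this one). The masking step can be isolated as a small sketch; the function name and the `smallest` flag are illustrative, not part of the original code:

```python
import numpy as np
import numpy.ma as ma

def pick_positive_offset(match_row, smallest=True):
    """Return the index of the smallest (or largest) non-zero entry of a
    GMS match-count row. Zeros mean "no matches" and are masked out, so
    they can never be selected."""
    masked = ma.masked_where(match_row == 0, match_row)
    return int(masked.argmin() if smallest else masked.argmax())
```

For a row like `[0, 5, 2, 0, 9]`, the masked argmin lands on index 2 (count 2) and the masked argmax on index 4 (count 9); a plain `np.argmin` without the mask would have picked a zero entry instead.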
from functions import findXmls, getMeetingInfosFromXml, convertBBBMeeting2mp4, addLogos, search
from os.path import dirname, abspath, join
import os
import sys

os.chdir("/home/karabinaa/Desktop/pythondenemeleri/bbbrecorder/bbb-recorder")

pathList = findXmls("bbb18")
query = input("Sorguyu Giriniz : ")  # Turkish: "Enter the query"
# Normalize Turkish dotted/dotless I before lowercasing, then build a matcher.
searcher = search(query.replace("I", "ı").replace("İ", "i").lower())

matchDict = {}
key = 1
for path in pathList:
    try:
        ilist = getMeetingInfosFromXml(path)
    except Exception:
        ilist = [""]
    if searcher(ilist[0].replace("I", "ı").replace("İ", "i").lower()):
        matchDict.update({key: ilist})
        key += 1

if len(query) > 0:
    for key, value in matchDict.items():
        print(str(key) + " : " + value[0])
    selection = input(