import wikipediaapi
import string
import random
from datetime import datetime
from PyDictionary import PyDictionary
from googletrans import Translator
from os import walk
import os
import emojis
import pyowm
from pyowm import OWM

dictionary = PyDictionary()  # Dictionary
translator = Translator()  # Translator
owm = OWM('8d7f66bc77243899a96e5ed130e400d2')
mgr = owm.weather_manager()


def getnote(incoming_msg):  # Store a note
    f = open('notes.txt', 'a')
    text = incoming_msg.replace('note:', '')
    f.write(text + '\n')
    f.close()
    return 'Thanks for your feedback!'


def checkemoji(incoming_msg):  # Check for an emoji
    if ':' in emojis.decode(incoming_msg):
        emj = emojis.decode(incoming_msg)
        emj = emj.replace(':', '')
        emjc = emojis.db.get_emoji_by_alias(emj)
def analyze(request):
    puncts = string.punctuation
    word_to_find = request.POST.get("word_input")
    replace_input = request.POST.get("replace_input")  # replacement text for the "replace" option (mirrors word_input)
    djText = request.POST.get('text', 'default')
    remPunc = request.POST.get('option', 'removepunc')
    cap = request.POST.get('option', 'capitalize')
    small = request.POST.get('option', 'toSmall')
    upper = request.POST.get('option', 'toUpper')
    word_find_flag = request.POST.get('option', 'word_find')
    New_Line = request.POST.get('option', 'New_line')
    Emails = request.POST.get('option', 'Email_Address')
    Links = request.POST.get('option', 'Links')
    Passgen = request.POST.get('option', 'Password_Generator')
    search_word = request.POST.get('option', 'Search_word')
    gallery = request.POST.get('option', 'q')
    Suggest_word = request.POST.get('option', 'suggest_word')
    Sen_Analysis = request.POST.get('option', 'Sentiment')
    Grammar = request.POST.get('option', 'grammar')
    Channel = request.POST.get('option', 'suggest_youtube')
    books = request.POST.get('option', 'suggest_books')
    articles = request.POST.get('option', 'suggest_articles')
    lemmitizer = request.POST.get('option', 'grammar')
    start_pdf = request.POST.get('option', 'generate_pdf')
    replace_text = request.POST.get('option', 'replace')
    Word_cloud = request.POST.get('option', 'wordcloud')
    analyzed_text = ""
    word_status = ""
    countword = len(djText.split())

    if word_find_flag == "word_find":
        if word_to_find != "":
            if djText.find(word_to_find) != -1:
                word_status = "found"
                word = djText.replace(
                    word_to_find,
                    f"""<b style="color:{"red"};">""" + word_to_find + "</b>")
                djText = word
                try:
                    synonym_01 = get_synonyms(word_to_find)
                    synonyms2 = random.sample(synonym_01, 4)
                    final = ""
                    for f in synonyms2:
                        final += f + " , "
                    example = get_example(word_to_find)
                    synonyms = final + example
                except:
                    synonyms = "Not Available"
            else:
                word_status = "not found"
                synonyms = "Text Not Found"
        analyzed_text = djText
        word_find = "Find Word = " + word_to_find
        synonym = format_html('<b style="color:{};">{}</b>', 'green', synonyms)
        result = {
            "analyzed_text": analyzed_text,
            "highlight": "Chosen word is highlighted in red colour and synonyms/examples in green colour",
            "purpose": word_find,
            "status": word_status,
            "synonym": synonym,
            "wordcount": countword,
            "analyze_text": True,
            "findWord": True
        }
    elif New_Line == "New_line":
        for char in djText:
            if char == '.':
                char = '\n'
            analyzed_text = analyzed_text + char
        result = {
            "analyzed_text": analyzed_text,
            "purpose": "Changes '.' to New Line",
            "analyze_text": True,
            "wordcount": countword
        }
    elif Emails == "Email_Address":
        regex = r'^[a-z0-9]+[\._]?[a-z0-9]+[@]\w+[.]\w{2,3}$'
        lst = re.findall(r'\S+@+\S+', djText)
        tmp = ""
        for x in lst:
            if re.search(regex, x):
                tmp += x
                tmp += '\n'
        result = {
            "analyzed_text": tmp,
            "purpose": "Find All Emails",
            "analyze_text": True,
            "wordcount": countword
        }
    elif Passgen == "Password_Generator":
        stop_words = set(stopwords.words('english'))
        chars = "!£$%&*#@"
        ucase_letters = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
        text = re.sub(r'[^\w\s]', '', djText)
        token = word_tokenize(text)
        filtered_sentence = []
        for w in token:
            if w not in stop_words:
                filtered_sentence.append(w)
        if len(filtered_sentence) > 0:
            random_word = random.choice(filtered_sentence)
        else:
            random_word = token[0]
        random_word = random_word.title()
        merge = ""
        for word in random_word.split():
            merge += random.choice(chars) + word[:-1] + word[-1].upper() \
                + random.choice(string.ascii_letters) + "@" + random.choice(ucase_letters) \
                + random.choice(string.digits) + " "
        final_text = merge[:-1]
        result = {
            "analyzed_text": final_text,
            "purpose": "Generate password from text",
            "generate_text": True,
            "wordcount": countword
        }
    elif search_word == "Search_word":
        url = 'https://www.dictionary.com/browse/'
        headers = requests.utils.default_headers()
        headers.update({
            'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.97 Safari/537.36'
        })
        req = requests.get(url + djText, headers=headers)
        soup = BeautifulSoup(req.content, 'html.parser')
        mydivs = soup.findAll("div", {"value": "1"})[0]
        for tags in mydivs:
            meaning = tags.text
        wrap = textwrap.TextWrapper(width=100)
        word_meaning = wrap.fill(text=meaning)
        result = {
            "analyzed_text": word_meaning,
            "purpose": "Searched Word",
            "generate_text": True,
            "wordcount": countword
        }
    elif Suggest_word == "suggest_word":
        find = requests.get(
            f"https://www.dictionaryapi.com/api/v3/references/thesaurus/json/{djText}?key={api_key}"
        )
        response = find.json()
        if len(response) == 0:
            print("Word Not Recognized!")
        else:
            k = []
            if str(response[0]).count(" ") == 0:
                for j in range(len(response)):
                    k.append(response[j])
                predict = " , ".join(k)
                djText = predict
            else:
                dictionary = PyDictionary()
                testdict = dictionary.synonym(djText)
                suggest = " , ".join(testdict)
                djText = suggest
        wrap = textwrap.TextWrapper(width=100)
        suggest = wrap.fill(text=djText)
        result = {
            "analyzed_text": suggest,
            "purpose": "Suggested Word",
            "generate_text": True,
            "wordcount": countword
        }
    elif Sen_Analysis == "Sentiment":
        djText = ' '.join(
            re.sub(r"(@[A-Za-z0-9]+)|([^0-9A-Za-z \t])|(\w+:\/\/\S+)", " ",
                   djText).split())
        analysis = TextBlob(djText)
        # Set sentiment from the polarity score.
        if analysis.sentiment.polarity > 0:
            final = str(djText) + " (Positive Text)"
        elif analysis.sentiment.polarity == 0:
            final = str(djText) + " (Neutral Text)"
        else:
            final = str(djText) + " (Negative Text)"
        result = {
            "analyzed_text": final,
            "purpose": "Sentiment Analysis",
            "analyze_text": True,
            "wordcount": countword
        }
    elif Grammar == "grammar":
        parser = GingerIt()
        result = parser.parse(djText)
        final = result["result"]
        if final == '':
            final = "Please write some text to check grammar"
        result = {
            "analyzed_text": final,
            "grammar": djText,
            "purpose": "Spelling & Grammar Check",
            "analyze_text": True,
            "wordcount": countword
        }
    elif lemmitizer == "lemmitize":
        wordnet_lemmatizer = WordNetLemmatizer()
        tokenization = nltk.word_tokenize(djText)
        count = True
        result = ""  # avoid a NameError when no word needs lemmatizing
        for w in tokenization:
            k = wordnet_lemmatizer.lemmatize(w, pos="v")
            if w != k:
                result = "{} -> {}".format(
                    w, wordnet_lemmatizer.lemmatize(w, pos="v"))
                count = False
        if count == True:
            final = "No need for lemmatization"
        if count == False:
            final = "(Original word) - > (Lemmatized word)"
        result = {
            "analyzed_text": result,
            "highlight": final,
            "purpose": "Lemmatization of text",
            "analyze_text": True,
            "wordcount": countword
        }
    elif Channel == "suggest_youtube":
        request.session['user-input'] = djText
        result = {
            "analyzed_text": djText,
            "purpose": "Suggest youtube channels",
            "status": "Press Button To View Channel links",
            "find_channel": True,
            "generate_text": True,
            "wordcount": countword
        }
    elif books == "suggest_books":
        request.session['user-input'] = djText
        result = {
            "analyzed_text": djText,
            "purpose": "Search Books",
            "status": "Press Button To View Books",
            "find_books": True,
            "generate_text": True,
            "wordcount": countword
        }
    elif articles == "suggest_articles":
        request.session['user-input'] = djText
        result = {
            "analyzed_text": djText,
            "purpose": "Search Articles",
            "status": "Press Button To View Articles",
            "find_articles": True,
            "generate_text": True,
            "wordcount": countword
        }
    elif start_pdf == "generate_pdf":
        request.session['user-input'] = djText
        result = {
            "analyzed_text": "Check Your Pdf",
            "purpose": "Generate Pdf",
            "status": "Press Button To View Pdf",
            "make_pdf": True,
            "generate_text": True,
            "wordcount": countword
        }
    elif replace_text == "replace":
        final_text = re.sub(word_to_find, replace_input, djText)
        result = {
            "analyzed_text": final_text,
            "purpose": "Replacement of text in sentence",
            "analyze_text": True,
            "wordcount": countword
        }
    elif Word_cloud == "wordcloud":
        cloud = WordCloud(background_color="white",
                          max_words=200,
                          stopwords=set(STOPWORDS))
        wc = cloud.generate(djText)
        buf = io.BytesIO()
        wc.to_image().save(buf, format="png")
        data = base64.b64encode(buf.getbuffer()).decode("utf8")
        final = "data:image/png;base64,{}".format(data)
        result = {
            "analyzed_text": " ",
            "purpose": "Wordcloud",
            "my_wordcloud": final,
            "generate_text": True,
            "wordcount": countword
        }
    elif gallery == "q":
        request.session['user-input'] = djText
        result = {
            "analyzed_text": djText,
            "purpose": "Images",
            "status": "Press Button To View Images",
            "find_image": True,
            "generate_text": True,
            "wordcount": countword
        }
    elif remPunc == 'removepunc':
        for char in djText:
            if char not in puncts:
                analyzed_text = analyzed_text + char
        result = {
            "analyzed_text": analyzed_text,
            "purpose": "Remove Punctuations",
            "analyze_text": True,
            "wordcount": countword
        }
    elif cap == "capitalize":
        analyzed_text = djText.capitalize()
        result = {
            "analyzed_text": analyzed_text,
            "purpose": "Capitalize",
            "analyze_text": True,
            "wordcount": countword
        }
    elif small == "toSmall":
        analyzed_text = djText.lower()
        result = {
            "analyzed_text": analyzed_text,
            "purpose": "To Smallercase",
            "analyze_text": True,
            "wordcount": countword
        }
    elif upper == "toUpper":
        analyzed_text = djText.upper()
        result = {
            "analyzed_text": analyzed_text,
            "purpose": "To Uppercase",
            "analyze_text": True,
            "wordcount": countword
        }
    elif Links == "Links":
        pattern = r'(?:(?:https?|ftp|file):\/\/|www\.|ftp\.)(?:\([-A-Z0-9+&@#\/%=~_|$?!:,.]*\)|[-A-Z0-9+&@#\/%=~_|$?!:,.])*(?:\([-A-Z0-9+&@#\/%=~_|$?!:,.]*\)|[A-Z0-9+&@#\/%=~_|$])'
        links = re.findall(pattern, djText, re.IGNORECASE)
        analyzed_text = ""
        i = 0
        for x in links:
            i = i + 1
            analyzed_text += f'<a href="{x}" target="_blank">Link {i}</a>'
            analyzed_text += '\n '
        result = {
            "analyzed_text": analyzed_text,
            "purpose": "Find All Links",
            "analyze_text": True,
            "wordcount": countword
        }
    else:
        return HttpResponse(
            '''<script type="text/javascript">alert("Please select at least one option.");</script>'''
        )
    return render(request, 'analyze.html', result)
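The remove-punctuation branch above filters characters one at a time in a Python loop. As a side note, the same result can be had in one pass with `str.translate` (a minimal stdlib-only sketch; the function name is ours, not from the original view):

```python
import string

def remove_punctuation(text):
    # Build a translation table mapping every punctuation character
    # to None, then apply it to the whole string in one call.
    return text.translate(str.maketrans("", "", string.punctuation))
```

This avoids quadratic string concatenation on long inputs, since `analyzed_text + char` copies the accumulated string on every iteration.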
from PyDictionary import PyDictionary

dictionary = PyDictionary()  # renamed from `dict` to avoid shadowing the builtin
word = input("Enter the word to find the meaning: ")
meaning = dictionary.meaning(word)
print(meaning)
    return vital_words_list


def text_comparison(output_1, output_2):
    proximity_index = 0
    sum_index = 0
    for word in output_1:
        sum_index += word[1]
        for other_word in output_2:
            if word[0] in other_word[0] or other_word[0] in word[0]:
                proximity_index += word[1] + other_word[1]
    for other_word in output_2:
        sum_index += other_word[1]
    comparison_index = proximity_index / sum_index
    return comparison_index


t = key_stats(file_0)
b = key_stats(file_2)
print(text_comparison(t, b))

dictionary = PyDictionary()

# def respond(wordList):
#     output = ""
#     for word in wordList:
#         output = (output + " " + (random.choice(dictionary.synonym(word, "html.parser"))))
#     return output

dictionary = PyDictionary("hotel", "ambush", "joy", "perceptive")
print(dictionary.getSynonyms())
# print(respond(['chicken', 'food']))
def definition(word):
    dictionary = PyDictionary(word)
    definitions = dictionary.getMeanings()
    definitionx = definitions[word]['Noun']
    return definitionx
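The `definition` helper above indexes `definitions[word]['Noun']` directly, which raises a `KeyError` (or `TypeError`) when the word is unknown or has no noun sense. A minimal defensive sketch of the same lookup, with hypothetical names and the `getMeanings()` result shape stubbed as a plain dict so it runs without network access:

```python
def first_definition(meanings, word, part_of_speech="Noun"):
    # meanings is assumed shaped like PyDictionary.getMeanings():
    # {word: {"Noun": [...], "Verb": [...]}} or {word: None} for unknown words.
    senses = (meanings or {}).get(word) or {}
    defs = senses.get(part_of_speech) or []
    return defs[0] if defs else None

sample = {"dog": {"Noun": ["a domesticated carnivorous mammal"]}}
print(first_definition(sample, "dog"))  # first noun sense
print(first_definition(sample, "cat"))  # None instead of a KeyError
```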
def uploaded_file(filename, s, e):
    import fitz
    import pytesseract
    pytesseract.pytesseract.tesseract_cmd = r"C:\Program Files (x86)\Tesseract-OCR\tesseract.exe"
    pdffile = filename
    doc = fitz.open('static' + '/' + filename)
    for i in range(int(s) - 1, int(e)):
        page = doc.loadPage(i)  # page number
        pix = page.getPixmap()
        output = "outfile" + str(i) + ".png"
        pix.writePNG(output)
    x = ''
    for i in range(int(s) - 1, int(e)):
        x += pytesseract.image_to_string(f'outfile{str(i)}.png')
    from PyDictionary import PyDictionary
    from summa import keywords
    from summa.summarizer import summarize
    import nltk
    from nltk.tokenize import sent_tokenize
    from docx import Document
    f = x
    b = str(filename.replace('.pdf', ''))
    a = x
    a = keywords.keywords(a)
    dictionary = PyDictionary()
    a = a.split('\n')
    a1 = []
    for i in a:
        x = i.split(' ')
        for j in x:
            a1.append(j)
    a1.sort(key=lambda s: len(s))
    a1.reverse()
    try:
        a1 = a1[:20]
    except:
        pass
    a = set(a1)
    a = tuple(a1)
    a1 = []
    for i in range(10):
        try:
            a1.append(a[i])
        except:
            pass
    from nltk.stem import WordNetLemmatizer
    lemmatizer = WordNetLemmatizer()
    a = a1
    a1 = []
    for i in a:
        a1.append(lemmatizer.lemmatize(i))
    a = list(set(a1))
    a1 = a
    a = [dictionary.meaning(i) for i in a1]
    z = sent_tokenize(summarize(f, ratio=0.25))
    doc = Document()
    doc.add_heading('Notes for ' + b, 0)
    for i in z:
        doc.add_paragraph(i)
    doc.add_heading('Vocab Words from ' + b, 0)
    for i in range(len(a)):
        c = doc.add_paragraph(str(i + 1) + ') ')
        c.add_run(a1[i]).bold = True
        c.add_run(': ')
        d = str(list(a[i].values()))
        d = d.replace('[', '')
        d = d.replace(']', '')
        c.add_run(d)
        g = doc.add_paragraph('')
        g.add_run('Synonyms for ')
        g.add_run(a1[i].upper() + ': ').bold = True
        from datamuse import datamuse
        api = datamuse.Datamuse()
        s = api.words(ml=a1[i], max=10)
        s1 = []
        for res in s:  # renamed from `i` to avoid clobbering the outer loop index
            for j in res:
                if j == 'word':
                    s1.append(res[j])
        g.add_run(str(s1).replace('[', '').replace(']', '').replace("'", '')).italic = True
    whitelist = set('abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ')
    fileName = b.replace(' ', '')
    fileName = ''.join(filter(whitelist.__contains__, fileName))
    fileName += '.docx'
    doc.save(fileName)
    import cloudmersive_convert_api_client
    from cloudmersive_convert_api_client.rest import ApiException
    configuration = cloudmersive_convert_api_client.Configuration()
    configuration.api_key['Apikey'] = 'f0c513bc-8c00-4491-830e-3e83b015feb6'
    api_instance = cloudmersive_convert_api_client.ConvertDocumentApi(
        cloudmersive_convert_api_client.ApiClient(configuration))
    try:
        # Convert Word DOCX document to PDF
        api_response = api_instance.convert_document_docx_to_pdf(fileName)
        file = open('static/' + fileName.replace('.docx', '.pdf'), 'wb')
        file.write(api_response)
        file.close()
    except ApiException as e:
        print(
            "Exception when calling ConvertDocumentApi->convert_document_docx_to_pdf: %s\n"
            % e)
    myFile = fileName.replace('.docx', '.pdf')
    myFile2 = myFile
    note = Note(noteFile=str(myFile2), creator=current_user)
    db.session.add(note)
    db.session.commit()
    myFile = url_for('.static', filename=myFile)
    return render_template('notes.html', myFile=myFile)
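The whitelist filter above keeps only ASCII letters when building the output filename. That idea can be isolated into a small, testable helper (the function name is ours, for illustration):

```python
import string

ALLOWED = set(string.ascii_letters)

def safe_docx_name(title):
    # Drop spaces, keep only ASCII letters (as the whitelist filter above
    # does), then append the .docx extension.
    base = "".join(ch for ch in title.replace(" ", "") if ch in ALLOWED)
    return base + ".docx"
```

Note that digits and punctuation are stripped too, so `"My Notes (v2)"` becomes `"MyNotesv.docx"` — the same behavior as the original filter.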
def save_word(new_word, target_lang, native_lang):
    translator = google_translator()
    dictionary = PyDictionary(new_word)
    definition = ''
    wordClass = ''
    try:
        if dictionary.getMeanings()[new_word] is None:
            print("there are no dictionary meanings")
            l_ref = ''
            if target_lang == "English":
                l_ref = 'en'
            if target_lang == "Spanish":
                l_ref = 'es'
            if target_lang == "French":
                l_ref = 'fr'
            if target_lang == "Russian":
                l_ref = 'ru'
            # Backup using the Oxford dictionary API.
            # It has a 1000-request limit, so it is ONLY to be used as a
            # fallback when PyDictionary doesn't work, as it sometimes does.
            app_id = '357c2725'
            app_key = 'ec311ce6a30cde29b5a736a5301f0d9c'
            url = ('https://od-api.oxforddictionaries.com/api/v2/entries/' +
                   l_ref + '/' + new_word.lower() + '?' + "fields=definitions")
            r = requests.get(url, headers={
                'app_id': app_id,
                'app_key': app_key
            })
            print(r)
            stage1 = json.dumps(r.json()).split('definitions')
            print(stage1)
            check = str(stage1).split('"')
            # Error checking for when there's no translation.
            if check[1] == 'error':
                definition = 'None'
                wordClass = "None"
            else:
                wordStage1 = json.dumps(r.json()).split('lexicalCategory')
                wordStage2 = str(wordStage1[1]).split('"text": ')
                wordClass = str(wordStage2[1]).split('}')[0]
                # stage1[1] = re.sub('[^\\w-]+', '', stage1[1])
                stage2 = str(stage1).split('id')
                defRes = stage2[0]
                for char in defRes:
                    if char.isalnum():
                        definition += char
                    elif char == ' ':
                        definition += char
                definition = definition[1:]
        else:
            definition = str(dictionary.getMeanings()).split('[')
            wordClass = definition[0]
            wordClass = wordClass.split(':')
            wordClass = str(wordClass).split('{')
            wordClass = wordClass[2]
            wordClass = str(wordClass).split('"')
            wordClass = wordClass[0]
            definition = definition[1].split('"')
            definition = definition[0]
            definition = str(definition).split(']')
            definition = definition[0]
            definition = definition[1:]
    except:
        pass
    print("made it here")
    # English word processing (temporarily uses English)
    result = translator.translate(new_word, lang_tgt='en')
    # result = new_word
    if isinstance(result, list):
        result = result[0]
    result = re.sub("[^a-zA-Z']+", '', result)
    en_word = EnglishWord(word=result,
                          definition=translator.translate(definition, 'en'),
                          word_class=translator.translate(wordClass, 'en'))
    print("The result is " + result)
    en_word.save()
    # Spanish word processing
    result1 = translator.translate(new_word, lang_tgt='es')
    if isinstance(result1, list):
        result1 = result1[0]
    # This line is an error and cuts out special characters in the Spanish alphabet:
    # result1 = re.sub("[^-_/.,\\p{L}0-9 ]+", '', result1)
    # result1 = re.sub("[^a-zA-Z']+", '', result1)
    spa_word = SpanishWord(word=result1,
                           definition=translator.translate(definition, 'es'),
                           word_class=translator.translate(wordClass, 'es'))
    spa_word.save()
    # French word processing
    result2 = translator.translate(new_word, lang_tgt='fr')
    if isinstance(result2, list):
        result2 = result2[0]
    result2 = re.sub("[^a-zA-Z']+", '', result2)
    fr_word = FrenchWord(word=result2,
                         definition=translator.translate(definition, 'fr'),
                         word_class=translator.translate(wordClass, 'fr'))
    fr_word.save()
    # Russian word processing
    result3 = translator.translate(new_word, lang_tgt='ru')
    if isinstance(result3, list):
        result3 = result3[0]
    result3 = re.sub("[^\\w-]+", '', result3)
    rus_word = RussianWord(word=result3,
                           definition=translator.translate(definition, 'ru'),
                           word_class=translator.translate(wordClass, 'ru'))
    rus_word.save()
    add_master_dict(
        EnglishWord.objects.filter(word=result).first(),
        SpanishWord.objects.filter(word=result1).first(),
        FrenchWord.objects.filter(word=result2).first(),
        RussianWord.objects.filter(word=result3).first())
    # print(native_lang)
    # print(result3)
    if native_lang == "English":
        print("saved word (EN)")
        return result
    if native_lang == "Spanish":
        print("saved word (SP)")
        return result1
    if native_lang == "French":
        print("saved word (FR)")
        return result2
    if native_lang == "Russian":
        print("saved word (RU)")
        return result3
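The language-code selection in `save_word` above uses a chain of `if` statements. The same mapping reads more directly as a dict lookup (a minimal sketch; the helper name is ours):

```python
LANG_CODES = {"English": "en", "Spanish": "es", "French": "fr", "Russian": "ru"}

def lang_code(name, default="en"):
    # Table lookup replaces the chain of if-statements, and the default
    # covers the l_ref = '' fallthrough case in the original.
    return LANG_CODES.get(name, default)
```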
def answer(message, admin):
    sleep = ["Programmer asleep... come back later", "Ask me again... I'm tired!",
             "Random stuff is not actually random... ask again!",
             "Loading please wait... ask later.", "Closed. Come back soon!",
             "I'm rude, so ask later.", "Python died. Ask later.",
             "Hold on. Let me Google it. Ask me later",
             "Go away. I do not wish to answer at this time."]
    noQ = ["That is no question...",
           "I could write an actual question when I was 3 years old.",
           "You no grammar? < an actual question",
           "How dumb are you? question = ?",
           "Come back after you learn what a question consists of!"]
    noSmart = ["That's an odd question.", "I'm sorry, I don't understand.",
               "Such a random question.", "Why do you ask that?",
               "Please consider whether you can answer your own question.",
               "Perhaps the answer lies within yourself?", "Why don't you tell me?",
               "Wow! What a weird question.", "Can you elaborate on that?",
               "Very interesting. I'll leave that up to you!", "Did you try Googling it?",
               "Wow... Just wow.", "Elaborate please", "Why am I here? ...",
               "...okay... try again please.", "I'm giving up. Is this really a question???",
               "What does this really mean?"]
    # ["kw birthday", "I was first functioning as a basic answer bot in early 2017."]
    # Simpler questions come first, e.g. "can you" and "can i" before "can".
    # Key words (kw) usually come first:
    #   "kwhello"  -> hello must be the first word in the question
    #   "kw hello" -> must not be the first word
    # An entry can have up to 2 key words in its first 2 options.
    questions = [
        ["kw source", "kw your", "I have a personal built in database but I also look up my facts on Wikipedia and reference definitions of words using a dictionary."],
        ["kw you", "kwwho", "I am an AI that answers your questions to the best of my ability developed by James."],
        ["kw your", "kw name", "I don't have a name, " + random.choice(["I guess", "If you want"]) + " you can call me Test Dummy as that is my originated Discord bot name."],
        ["kw your", "kw creator", "My creator is James. He programmed me from scratch. I try my best to help you answer your question."],
        ["kw coin", "kwflip", "I choose " + random.choice(["heads.", "tails."]),
         "Okay... " + random.choice(["heads!", "tails!"]),
         "I'll flip a " + random.choice(["penny", "dime", "quarter", "loonie", "nickel", "toonie", "one-hundred dollar bill"]) + ". I got " + random.choice(["heads!", "tails!"])],
        ["kw time", "kwwhat", random.choice(["It's", "It's about", "I think it's"]) + " time to get a watch."],
        ["kw dice", "kwroll", "I rolled a " + str(random.randint(1, 6)) + ".",
         "Rolling... I got " + str(random.randint(1, 6)) + "."],
        ["kwhello", "Hello... I'm glad you could drop by today.", "Hi there... how are you today?",
         "Hello, how are you feeling today?", "Hello there!", "Not really a question... but hi!",
         "Hello, it's me, I was wondering if after all these years you'd like to meet, To go over everything, They say that time's supposed to heal ya, But I ain't done much healing."],
        ["kw or ", "Why don't you choose?", "Choose using your best judgement!",
         "Always choose the first option!", "It depends on conditions to make the best decision.",
         "Make an educated guess.", "Why not use //choose?"],
        ["kwmark", str(random.randint(80, 100)) + "%... that's a" + random.choice(["n excellent", " great", " good"]) + " mark.",
         str(random.randint(70, 80)) + "%... that's a" + random.choice([" decent", " alright", "n okay"]) + " mark.",
         str(random.randint(0, 70)) + "%..." + random.choice([" let's not talk about it.", " I think you can do better.", " there is always next time.", " you tried your best right?"]),
         "It's " + random.choice(["around ", "something like ", "about "]) + str(random.randint(70, 100)) + "%"],
        ["kwprime number", "A prime number is a whole number greater than 1, whose only two whole-number factors are 1 and itself."],
        ["kwwill", "It could be very so.", "That's hard to answer.", "I cannot predict the future!",
         "Probably.", "It depends if you like bad or good news."],
        ["can you", "What makes you think I can't %s?", "If I could %s, then what?", "Why do you ask if I can %s?"],
        ["can i", "Perhaps you don't want to %s.", "Do you want to be able to %s?", "If you could %s, would you?",
         "Yes you can %s.", "Sorry, I don't think you can %s.", "I wouldn't count on it.", "Not with that math mark!"],
        ["can", "Yes.", "No.", "Maybe.", "It can be quite possible.", "Not with that math mark!",
         "I don't understand you!", "How immature!", "Hmmm... that's tough!"],
        ["do i need", "Why do you need %s?", "Would it really help you to get %s?", "Are you sure you need %s?"],
        ["do you think it is", "Perhaps it's %s -- what do you think?", "If it were %s, what would you do?",
         "It could well be that it is %s."],
        ["are you", "Why does it matter whether I am %s?", "Would you prefer it if I were not %s?",
         "Perhaps you believe I am %s.", "I may be %s -- what do you think?"],
        ["what do you think", "What do *you* think about %s?", "My opinion is hard to create.",
         "Opinions are hard...", "To further create an opinion, I will need resources."],
        ["what", "Why do you ask?", "How would an answer to that help you?", "What do you think?",
         "If you think about it, you will know."],
        ["how many", str(random.randint(0, 100)) + " to be exact."],
        ["how much", str(random.randint(0, 100)) + " to be exact."],
        ["how are", "Very well.", "It's okay.", "Very fine.", "Fine.", "Good.", "Great.", "Excellent.", "Okay.", "So-so."],
        ["how come", "That's hard to say.", "Why do you think?"],
        ["how", "How do you suppose?", "Perhaps you can answer your own question.", "What is it you're really asking?"],
        ["why dont you", "Do you really think I don't %s?", "Perhaps eventually I will %s.", "Do you really want me to %s?"],
        ['why cant i', "Do you think you should be able to %s?", "If you could %s, what would you do?",
         "I don't know - why can't you %s?", "Have you really tried?"],
        ["why is", "Why do *you* think?", "That requires a lot of observations to support that... to the Google!",
         "Proof must be provided to create a conclusion!"],
        ["why do you think", "Why do *you* think %s?", "That requires a lot of observations to support that... to the Google!",
         "Proof must be provided to create a conclusion!"],
        ["why do", "Why don't you tell me the reason why %s?", "Why do you think %s?"],
        ["why does", "Why don't you tell me the reason why %s?", "Why do you think %s?"],
        ["why", "Why don't you tell me the reason why %s?", "Why do you think %s?"],
        ["i dont", "Don't you really %s?", "Why don't you %s?", "Do you want to %s?"],
        ["is there", "Do you think there is %s?", "It's likely that there is %s.", "Would you like there to be %s?"],
        ["where", "Use some logic to find out where.", "To the google maps!", "You should reference a map!"],
        ["did", "I don't know much history.", "Google all the history, it's easier!"],
        ["who is", "Is %s rich?", "Do you know %s?",
         "I think %s is a" + random.choice([" nice", " great", " good", "n okay", " not that nice of a"]) + " person."],
        ["who", "Are they rich?", "I'm not a people person.", "I don't know much about anyone... I'm all alone."],
        ["is", "Yes.", "No.", "Maybe.", "It can be quite possible.", "I don't understand you!", "Hmmm... that's tough!"],
        ["does", "Yes.", "No.", "Maybe.", "It can be quite possible.", "I don't understand you!", "Hmmm... that's tough!"],
        ["when", "Soon.", "Can you estimate?", "In about " + str(random.randint(5, 60)) + " minutes.",
         "It's going to be a while."],
        ["am i", "If you think so!", "Yes.", "No.", "Maybe.", "It can be quite possible.",
         "I don't understand you!", "Hmmm... that's tough!"],
        ["do you think", "Do you think %s?", "My opinion is hard to create.", "Opinions are hard..."],
        ["should", "Yes.", "No.", "Maybe.", "Do you think %s?", "My opinion is hard to create.",
         "Opinions are hard...", "That requires a lot of observations to support that... to the Google!",
         "Proof must be provided to create a conclusion!"],
        ["on a scale of ", "How many significant digits should I assume for the scale?",
         "Well, I would say around " + str(random.randint(0, 10)) + ".",
         "Maybe about " + str(random.randint(0, 10)) + ".",
         "Exactly " + str(random.randint(0, 10)) + ".",
         "Go low... " + str(random.randint(0, 3)) + "."]
    ]
    numbers = {
        "one": 1, "two": 2, "three": 3, "four": 4, "five": 5, "six": 6, "seven": 7,
        "eight": 8, "nine": 9, "ten": 10, "eleven": 11, "twelve": 12, "thirteen": 13,
        "fourteen": 14, "fifteen": 15, "sixteen": 16, "seventeen": 17, "eighteen": 18,
        "nineteen": 19, "twenty": 20, "thirty": 30, "forty": 40, "fifty": 50,
        "sixty": 60, "seventy": 70, "eighty": 80, "ninety": 90,
        "plus": ")+(", "and": "+", "point": ".", "divided": ")/(", "minus": ")-(",
        "times": ")*(", "squared": ")**2", "cubed": ")**3", "exponent": ")**",
        "^": "**", "raised": ")**", "power": ")**",
        "hundred": "*100", "thousand": "*1000", "million": "*1000000",
        "billion": "*1000000000", "trillion": "*1000000000000"
    }
    numbersOnly = ["1", "2", "3", "4", "5", "6", "7", "8", "9", "0",
                   "one", "two", "three", "four", "five", "six", "seven", "eight", "nine", "zero",
                   "ten", "eleven", "twelve", "thirteen", "fourteen", "fifteen", "sixteen",
                   "seventeen", "eighteen", "nineteen", "twenty", "thirty", "forty", "fifty",
                   "sixty", "seventy", "eighty", "ninety", "hundred", "thousand", "million",
                   "billion", "trillion"]
    largeNumbersOnly = ["twenty", "thirty", "forty", "fifty", "sixty", "seventy", "eighty",
                        "ninety", "hundred", "thousand", "million", "billion", "trillion"]
    operationsOnly = ["power", "raised", "exponent", "cubed", "squared", "times", "minus",
                      "divided", "point", "+", "*", "-", "/"]
    exponentialOnly = ["power", "raised", "exponent", "cubed", "squared"]
    accomplish = False
    message = message.replace("'", "")
    seed = 0
    # for letter in message:
    #     seed += ord(letter)
    calculateConfidence = 0
    exponentCount = 0
    for number in numbersOnly:
        if number in message:
            calculateConfidence += 1
    for operation in operationsOnly:
        if operation in message:
            calculateConfidence += 1
    for word in message.split():
        if word in exponentialOnly:
            exponentCount += 1
    if calculateConfidence >= 2:
        message = message.replace("?", " ?")
        solve = "(" + "(" * exponentCount
        previous = False
        for word in message.split():
            if previous == True and word in numbersOnly and word not in largeNumbersOnly:
                solve += "+"
            if word in numbersOnly:
                previous = True
            else:
                previous = False
            # print(solve)
            for operation in ["+", "-", "*", "**", "/"]:
                if operation in word:
                    solve += str(word)
                    continue
            try:
                solve += str(int(word))
            except:
                try:
                    solve += str(float(word))
                except:
                    try:
                        solve += str(numbers[word])
                    except:
                        continue
        try:
            solveDisplay = solve.replace("**", "^")
            solveDisplay = solveDisplay.replace("/", "\u00F7")
            solveDisplay = solveDisplay.replace("*", "\u00D7")
            solveDisplay = solveDisplay + str(")")
            return solveDisplay + "=" + str(eval(solve + ")"))
        except:
            if " or " in message:
                message = message.replace("?", "")
                message = message.replace(",", "")
                splitQuestion = message.split()
                searchIndex = -1
                for word in splitQuestion:
                    searchIndex += 1
                    if word == "or":
                        break
                return random.choice(
                    ["I choose the first: " + splitQuestion[searchIndex - 1],
                     "I choose the second: " + splitQuestion[searchIndex + 1],
                     "I choose " + splitQuestion[searchIndex - 1],
                     "I choose " + splitQuestion[searchIndex + 1]]) + "."
    # if "?" not in message:
    #     return random.choice(noQ)
    elif random.randint(0, 20) == 1:
        return random.choice(sleep)
    # elif "james" in message.lower() or "greenslime" in message.lower():
    #     return "We do not talk about my creator!"
    else:
        # random.seed(seed)
        wiki = True
        define = False
        noWiki = ["you", "my", "your", "me", "am", "i", "I", "mark", "time"]
        message = message.replace("?", "")
        messageWords = message.split()
        for word in messageWords:
            if word == "mean":
                define = True
            if word in noWiki:
                wiki = False
            # print(word)
        if define == True:
            dictionary = PyDictionary()
            if dictionary.meaning(messageWords[2]) == None:
                return "Sorry I can't seem to find a definition for that."
            else:
                return str(dictionary.meaning(messageWords[2]))
        if (message.lower().startswith("what") or message.lower().startswith("who")) and wiki == True:
            try:
                wikiResult = str(wikipedia.summary(message[7:])).split()
                sentenceIndex = 0
                randomEnd = []
                randomEndIndex = 0
                for word in wikiResult:
                    sentenceIndex += 1
                    if sentenceIndex >= 100:
                        break
                    elif word[-1] == ".":
                        randomEnd.append(sentenceIndex)
                randomEndIndex = int(random.choice(randomEnd))
                # print(randomEndIndex)
                sentenceIndex = 0
                wikiResultReturn = []
                for word in wikiResult:
                    sentenceIndex += 1
                    wikiResultReturn.append(word)
                    if sentenceIndex == randomEndIndex:
                        break
                return ' '.join(wikiResultReturn)
            except wikipedia.exceptions.DisambiguationError as exception:
                multipleResults = "What are you talking about? " + ", ".join(exception.options)
                return multipleResults
            except wikipedia.exceptions.PageError:
                x = 1
        for row in range(len(questions)):
            if questions[row][1][0:2] == "kw":
                if (questions[row][1][2:] in message.lower()) and (questions[row][0][2:] in message.lower()):
                    return random.choice(questions[row][2:])
            elif questions[row][0][0:2] == "kw":
                if questions[row][0][2:] in message.lower():
                    return random.choice(questions[row][1:])
            elif message.lower().startswith(questions[row][0]):
                oldsubject = message[(len(questions[row][0]) + 1):]
                subject = oldsubject.replace("?", "")
                subject = subject.replace(" my ", " your ")
                subject = subject.replace(" me ", " you ")
                subject = subject.replace(" i ", " you ")
                subject = subject.replace(" ill ", " you'll ")
                choice = random.choice(questions[row][1:])
                if "%" in choice:
                    return choice % (subject)
                else:
                    return choice
                # print("%" in choice)
                accomplish = True
        if accomplish == False:
            # if admin == True:
            #     urls = []
            #     for url in search(message, tld='ca', lang='en', stop=1):
            #         urls.append(url)
            #     urlResult = "This may help: " + urls[0]
            #     return urlResult
            # else:
            return random.choice(noSmart)
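The question-matching loop above is an Eliza-style pattern table: each entry starts with a prefix, the remaining elements are response templates, and `%s` is filled with the rest of the question after flipping pronouns. A minimal self-contained sketch of that pattern (the `RULES` entries here are a small assumed subset, and `reply` is our name for the dispatch step):

```python
import random

# Each rule: [prefix, template, template, ...]; "%s" receives the tail
# of the question with first-person pronouns flipped to second person.
RULES = [
    ["can you", "What makes you think I can't %s?", "If I could %s, then what?"],
    ["why is", "Why do *you* think?"],
]

def reply(question):
    q = question.lower().rstrip("?")
    for rule in RULES:
        prefix, templates = rule[0], rule[1:]
        if q.startswith(prefix):
            subject = q[len(prefix):].strip().replace(" i ", " you ")
            choice = random.choice(templates)
            return choice % subject if "%s" in choice else choice
    return "I don't understand."
```

Ordering matters, just as the original comments note: more specific prefixes ("can you") must come before more general ones ("can"), or the general rule shadows them.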
def __init__(self):
    self.dbclient = db.Ecampusdb()
    self.stop_words = set(stopwords.words('english'))
    self.dictionary = PyDictionary()
import bs4
import requests
from bs4 import BeautifulSoup as bs
from urllib import request
import re
import os
import urllib
from nltk.corpus import wordnet
from datamuse import datamuse
from docx import Document
import csv
from PyDictionary import PyDictionary

dictionary = PyDictionary()
api = datamuse.Datamuse()
doc_path = "C:/Users/Andrew/Documents/Vocab/Adjective-Stage2-Vocab.docx"

with open('C:/Users/Andrew/Documents/Vocab/AdjYear234list.csv', 'r') as file:
    reader = csv.reader(file)
    for row in reader:
        word = ''.join(row)
        print(word)
        my_doc = Document(doc_path)
        try:
            # get word form
            word_form = wordnet.synsets(word)[0].pos()
            print(word + ":[" + word_form + ".]")
            my_doc.add_heading(word, 3)
            # get part of speech
            dictionary = PyDictionary(word)
from itertools import permutations as pm
from collections import Counter
from PyDictionary import PyDictionary
from bisect import bisect_left
import numpy as np
import random
import re
import string
import pickle

EngDict = PyDictionary()
AddOn = [(1, 0), (0, 1), (-1, 0), (0, -1)]  # For surrounding tiles
WWF = True     # Playing Words With Friends or traditional Scrabble
Small = False  # Small or large board
MaxTile = 7
GameName = 'test4'

# def binsrch2(lst, item):
#     return (item <= lst[-1]) and (lst[bisect_left(lst, item)] == item)

STileValues = {
    "a": 1, "c": 3, "b": 3, "e": 1, "d": 2,
    "g": 2, "f": 4, "i": 1, "h": 4,
'''Get definitions for a word'''
# Do a "pip install PyDictionary" from the command line first.
from PyDictionary import PyDictionary

DICT = PyDictionary()
CHECK_WORD = "house"

print(DICT.meaning(CHECK_WORD))
print("-----------")
print(DICT.synonym(CHECK_WORD))
print("-----------")
print(DICT.antonym(CHECK_WORD))
print("-----------")
print("French translation:")
print(DICT.translate(CHECK_WORD, 'fr'))
# Note: ignore the warnings; they wouldn't show up in a GUI.
def write(verbose='N', test=True):
    words = dict()
    sys.stdout = open(os.devnull, 'w')
    dictionary = PyDictionary('features="html.parser"')
    sys.stdout = sys.__stdout__
    cont = 'y'
    try:
        with open("words.json", "r+") as f:
            try:
                prev_words = json.load(f)
            except:
                prev_words = dict()
            while cont[0].upper() != 'N':
                if cont == 'y':
                    wordList = input("Enter the word (if >1 space separated): ").split()
                else:
                    wordList = cont
                wordList = [i.upper() for i in wordList]
                for word in wordList:
                    if word in prev_words.keys():
                        print(f"\nYou have encountered the word {word}")
                        print(f"{colored('Try and remember', 'red')}\n")
                        r = False
                        if test == False:
                            wait = 'y'
                            while wait.upper() == 'Y':
                                time.sleep(10)
                                if input("Remember ?: ").upper() == 'Y':
                                    wait = 'n'
                                    r = True
                                if input("Cannot remember ? Want to look at the meaning ?(Y/N): ").upper() == 'Y':
                                    r = False
                                    break
                        if not r:
                            nr = open("Not Remember", "r+")
                            nrList = nr.read().strip().split("\n")
                            if word not in nrList:
                                nr.write(word.upper())
                                nr.write("\n")
                            nr.close()
                            del nr
                            del nrList
                            if test:
                                print(f"{colored('Bad luck look it up later', 'cyan')} \N{disappointed face}")
                            else:
                                print("Heres the meaning: ")
                                print("\n=============================\n")
                                print(colored(word, 'green'))
                                parse_dict(len(word), None, prev_words[word])
                                print("\n=============================\n")
                    else:
                        sys.stdout = open(os.devnull, 'w')
                        meaning = dictionary.meaning(word)
                        synonyms = dictionary.synonym(word)
                        antonyms = dictionary.antonym(word)
                        sys.stdout = sys.__stdout__
                        if meaning is None:
                            print(f"{colored(f'We dont have the word {word}, !!! sorry', 'red')} \N{disappointed face}")
                            nf = open("Not Found", 'a+')
                            nfList = set(nf.read().strip().split("\n"))
                            if word not in nfList:
                                nf.write(word.upper())
                                nf.write("\n")
                            nf.close()
                            del nf
                            del nfList
                        else:
                            words[word] = dict()
                            words[word]["meaning"] = meaning
                            words[word]["synonym"] = synonyms
                            words[word]["antonyms"] = antonyms
                            if verbose == 'Y':
                                print("Heres the meaning: ")
                                print("\n=============================\n")
                                print(colored(word, 'green'))
                                parse_dict(len(word), None, words[word])
                                print("\n=============================\n")
                            else:
                                print(f"{colored(f'We found the word {word}', 'green')} \N{grinning face}")
                cont = input("New word (if >1 space separated): ").split()
            prev_words.update(words)
            f.seek(0)
            try:
                json.dump(prev_words, f, indent=2, sort_keys=True)
                f.close()
            except:
                print("could not write to file")
                prev_words = json.dumps(prev_words, indent=2)
                print("Printing on console copy to the file")
                print("\n\n=========================================\n")
                print(prev_words)
                print("=========================================\n")
                f.close()
    except EnvironmentError as e:
        print(f"Something went wrong\n {e}")
    return
def __init__(self):
    self.dictionary = PyDictionary()
def setup_pydictionary(self):
    self.dictionary = PyDictionary()
def get_meaning(word):
    dictionary = PyDictionary()
    return dictionary.meaning(word)
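Several of these snippets wrap `PyDictionary().meaning(word)`, which returns a `{part_of_speech: [definitions]}` dict or `None` when the word is unknown. A minimal sketch of the usual `None` guard, written against a stubbed lookup (the stub data is illustrative, not real PyDictionary output) so it runs offline:

```python
def safe_meaning(word, lookup):
    """Return the meanings dict for `word`, or an error entry.

    `lookup` is any callable with PyDictionary.meaning's contract:
    it returns a {part_of_speech: [definitions]} dict, or None
    when the word is not found.
    """
    result = lookup(word)
    if result is None:
        return {"error": f"No definition found for {word!r}"}
    return result


# Stub standing in for PyDictionary().meaning (illustrative data only).
stub = {"house": {"Noun": ["a dwelling that serves as living quarters"]}}.get

print(safe_meaning("house", stub))
print(safe_meaning("zzz", stub))
```

Passing the lookup in as a parameter keeps the guard testable without network access; in the snippets above, `PyDictionary().meaning` would be passed instead of `stub`.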
def notes():
    from PyDictionary import PyDictionary
    from summa import keywords
    from summa.summarizer import summarize
    import nltk
    from nltk.tokenize import sent_tokenize
    from newspaper import Article
    from docx import Document

    url = str(request.form['link'])
    a = Article(url)
    a.download()
    a.parse()
    f = a.text
    b = a.title
    a = a.text
    a = keywords.keywords(a)
    dictionary = PyDictionary()
    a = a.split('\n')
    a1 = []
    for i in a:
        x = i.split(' ')
        for j in x:
            a1.append(j)
    a1.sort(key=lambda s: len(s))
    a1.reverse()
    try:
        a1 = a1[:20]
    except:
        pass
    a = set(a1)
    a = tuple(a1)
    a1 = []
    for i in range(10):
        try:
            a1.append(a[i])
        except:
            pass
    from nltk.stem import WordNetLemmatizer
    lemmatizer = WordNetLemmatizer()
    a = a1
    a1 = []
    for i in a:
        a1.append(lemmatizer.lemmatize(i))
    a = list(set(a1))
    a1 = a
    a = [dictionary.meaning(i) for i in a1]
    z = sent_tokenize(summarize(f, ratio=0.25))
    doc = Document()
    doc.add_heading('Notes for ' + b, 0)
    for i in z:
        doc.add_paragraph(i)
    doc.add_heading('Vocab Words from ' + b, 0)
    for i in range(len(a)):
        c = doc.add_paragraph(str(i + 1) + ') ')
        c.add_run(a1[i]).bold = True
        c.add_run(': ')
        d = str(list(a[i].values()))
        d = d.replace('[', '')
        d = d.replace(']', '')
        c.add_run(d)
        g = doc.add_paragraph('')
        g.add_run('Synonyms for ')
        g.add_run(a1[i].upper() + ': ').bold = True
        from datamuse import datamuse
        api = datamuse.Datamuse()
        s = api.words(ml=a1[i], max=10)
        s1 = []
        for i in s:
            for j in i:
                if j == 'word':
                    s1.append(i[j])
        g.add_run(str(s1).replace('[', '').replace(']', '').replace("'", '')).italic = True
    whitelist = set('abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ')
    fileName = b.replace(' ', '')
    fileName = ''.join(filter(whitelist.__contains__, fileName))
    fileName += '.docx'
    doc.save(fileName)
    import cloudmersive_convert_api_client
    from cloudmersive_convert_api_client.rest import ApiException
    configuration = cloudmersive_convert_api_client.Configuration()
    configuration.api_key['Apikey'] = 'f0c513bc-8c00-4491-830e-3e83b015feb6'
    api_instance = cloudmersive_convert_api_client.ConvertDocumentApi(
        cloudmersive_convert_api_client.ApiClient(configuration))
    try:
        # Convert Word DOCX Document to PDF
        api_response = api_instance.convert_document_docx_to_pdf(fileName)
        file = open('static/' + fileName.replace('.docx', '.pdf'), 'wb')
        file.write(api_response)
        file.close()
    except ApiException as e:
        print("Exception when calling ConvertDocumentApi->convert_document_docx_to_pdf: %s\n" % e)
    myFile = fileName.replace('.docx', '.pdf')
    myFile2 = myFile
    note = Note(noteFile=str(myFile2), creator=current_user)
    db.session.add(note)
    db.session.commit()
    myFile = url_for('.static', filename=myFile)
    return render_template('notes.html', myFile=myFile)
def antonym():
    word_get = word.get()
    dictionary = PyDictionary()
    answer = dictionary.antonym(word_get)
    meaning_text.insert('end', answer)
def run_assistant():
    # received_command = take_command()
    received_command = input("enter u'r commands here: ")
    print(received_command)
    if 'command not received' in received_command:
        no_command_received = True
    else:
        no_command_received = False
    while no_command_received:
        print('no command received')
        received_command = take_command()
    if 'play' in received_command:
        import pywhatkit
        song = received_command.replace('play', '')
        talk('playing ' + song + ' song')
        pywhatkit.playonyt(song)
    elif 'who created you' in received_command:
        talk('I was created by yeskejji yelil')
    elif 'your name' in received_command:
        talk('My name is yeskejji yelil')
    elif 'time' in received_command:
        import datetime
        time = datetime.datetime.now().strftime('%I:%M %p')
        talk('The time is now ' + time)
        print(time)
    elif 'who is' in received_command:
        try:
            person = received_command.replace('who is', '')
            info = wikipedia.summary(person, 1)
            print(info)
            talk(info)
        except Exception as e:
            print(e)
            print("Not found in wikipedia")
            try:
                app_id = "HK8VGY-K22567Q59V"
                question = received_command.replace('ask', '')
                client = wolframalpha.Client('R2K75H-7ELALHR35X')
                res = client.query(question)
                answer = next(res.results).text
                print(answer)
                talk(answer)
            except Exception as e:
                print(e)
                print('I cant understand what u are speaking')
                google_search(received_command)
    elif 'joke' in received_command:
        joke = pyjokes.get_joke()
        talk(joke)
        print(joke)
    elif 'weather' in received_command:
        print('ok')
        city_name = last_word(received_command)
        from SKGEzhil_Voice_Assistant.script import weather
        weather.weather_report(city_name)
        print('weatherrr')
    elif 'event' in received_command:
        if 'what' in received_command:
            google_calendar.list_events()
        else:
            from SKGEzhil_Voice_Assistant.script import calendar_commands
            calendar_commands.calendar_commands()
    elif 'news' in received_command:
        from SKGEzhil_Voice_Assistant.script import news
        news.news_report()
    elif 'mail' in received_command:
        from SKGEzhil_Voice_Assistant.script import mail
        mail.read_mail()
    elif 'inbox' in received_command:
        from SKGEzhil_Voice_Assistant.script import mail
        mail.read_mail()
    elif 'google' in received_command:
        google_search(received_command)
    elif 'mean' in received_command:
        from PyDictionary import PyDictionary
        dictionary = PyDictionary()
        meaning = dictionary.meaning(last_word(received_command))
        print(meaning)
        y = meaning["Noun"]
        for i in y:
            print(i)
            talk(i)
            break  # only speak the first noun definition
    elif 'score' in received_command:
        from SKGEzhil_Voice_Assistant.script import cricket
        cricket.cricket_score()
    elif 'remember that' in received_command:
        from SKGEzhil_Voice_Assistant.script import remember
        received_command = received_command.replace('remember that', '')
        remember.remember(received_command)
    elif 'kept' in received_command:
        if 'remember' not in received_command:
            from SKGEzhil_Voice_Assistant.script import remember
            key_word = last_word(received_command)
            remember.retrieve(key_word)
        else:
            from SKGEzhil_Voice_Assistant.script import remember
            received_command = received_command.replace('remember that', '')
            remember.remember(received_command)
    elif 'alarm' in received_command:
        from SKGEzhil_Voice_Assistant.script import alarm
        alarm.create_alarm(received_command)
    elif 'wake' in received_command:
        from SKGEzhil_Voice_Assistant.script import alarm
        alarm.create_alarm(received_command)
    elif 'remind' in received_command:
        from SKGEzhil_Voice_Assistant.script import reminder
        reminder.create_reminder(received_command)
    else:
        try:
            question = received_command.replace('ask', '')
            client = wolframalpha.Client(config.wolframalpha_api)
            res = client.query(question)
            answer = next(res.results).text
            print(answer)
            talk(answer)
        except Exception as e:
            print(e)
            print('I cant understand what u are speaking')
            google_search(received_command)
    logs(received_command)
    for i in mean_keys:  # For each part-of-speech key in the dict (mean), print the key
        print('(', i, ')', sep='')  # e.g.: Noun, Verb, Adjective
        for j in range(len(mean[i])):  # and print the data for that key, line by line
            print('\t', j + 1, '. ', mean[i][j], sep='')
        print()


def find_synonyms(word):  # Function to find the synonyms of the word
    syn = dict.synonym(word)  # list containing the synonyms
    print('(Synonyms)\n\t', end='')
    for i in syn:
        print(i, end=', ')
    print('\n')


def find_antonyms(word):  # Function to find the antonyms of the word
    ant = dict.antonym(word)  # list containing the antonyms
    print('(Antonyms)\n\t', end='')
    for i in ant:
        print(i, end=', ')
    print('\n')


dict = PyDictionary()  # Creating an instance of the module (note: shadows the built-in `dict`)
word = input('Enter a word: ')
print()
find_meaning(word)
find_synonyms(word)
find_antonyms(word)
def get_tools(lst, ingredients, title):
    ingr = set()
    noun_phrases = set()
    ingrsList = list(ingredients.keys())
    ingrsList = [x.replace(",", "").replace("-", " ") for x in ingrsList]
    goodWords = ["used for", "used to", "utensil", "tool"]
    [ingr.add(word.lower()) for sent in ingrsList for word in sent.split(" ")]
    [ingr.add(word.lower()) for word in title.split(" ")]
    for step in lst:
        doc = nlp(step)
        nps = []
        for np in doc.noun_chunks:
            words = np.text.replace(",", "").split(" ")
            flag = False
            for w in words:
                if w in ingr or re.search(r'\d', w) or re.search(r'[A-Z]+[a-z]+$', w):
                    flag = True
                    break
            if not flag:
                nps.append(np)
        [noun_phrases.add(strip_preps(x)) for x in nps]
    noun_dict = {}
    for key in noun_phrases:
        noun_dict[key] = set(key.split(" "))
    for np in noun_dict.keys():
        np_set = set(np.split(" "))
        for np1 in noun_dict.keys():
            if np_set != noun_dict[np1]:
                common_set = np_set & noun_dict[np1]
                if common_set != set():
                    if np in noun_phrases:
                        noun_phrases.remove(np)
                    if np1 in noun_phrases:
                        noun_phrases.remove(np1)
                    noun_phrases.add(' '.join([
                        word for word in np.split(" ") if word in common_set
                    ]))
    for i in ingrsList:
        if np in i and i in noun_phrases:
            noun_phrases.remove(i)
    dictionary = PyDictionary()
    np_temp = set(noun_phrases)
    for np in np_temp:
        word = np.split(" ")[-1]
        meaningsDict = dictionary.meaning(word)
        if meaningsDict:
            if 'Noun' in meaningsDict.keys():
                meaningsNoun = meaningsDict['Noun']
                flag = [True for m in meaningsNoun for gw in goodWords if gw in m]
                if flag == []:
                    noun_phrases.remove(np)
            else:
                noun_phrases.remove(np)
        else:
            noun_phrases.remove(np)
    return [x for x in noun_phrases if len(x) > 1]
from django.shortcuts import render
from django.contrib.auth import get_user_model
from django.contrib.auth.models import User
from .models import Word, Meaning, Note
import json, ast
import datamuse
from PyDictionary import PyDictionary

api = datamuse.Datamuse()
dic = PyDictionary()


# Create your views here.
def search(request):
    context = {}
    context['save'] = False
    if request.method == 'POST':
        term = request.POST['term']
        if term != '':
            context['term'] = term
            similar = api.words(ml=term, max=5)
            context['similar'] = similar
            context['term'] = term
            definition = dic.meaning(term)
            temp = []
            if definition:
                for dkey in definition:
                    temp.append((dkey, definition[dkey]))
            else:
                temp = ['Meaning Not Found']
            context['definition'] = temp
        else:
            break
        print("Adding word:", word)
        print("Definition:", defn)
        new_entry = pd.DataFrame([[word, defn]], columns=d.columns)
        d = pd.concat([d, new_entry], ignore_index=True)
    d.sort_values('Word', inplace=True)
    d.reset_index(drop=True, inplace=True)
    write_dictionary_to_file(d)
    return (d)


client_1 = ("PyDictionary", PyDictionary())

## init word api 2
apiUrl = 'http://api.wordnik.com/v4'
apiKey = '263a2b19c795b9844520302bf530266a76754c313e57a5b2d'
client_2 = ("Wordnik", swagger.ApiClient(apiKey, apiUrl))
clients = dict([client_1, client_2])

## get list of all words in the dictionary
d = read_dictionary_from_file()
word_list = d.Word
print("Words in Dict:", len(d))

## for each word, delete the word and then call add_card()
for word in word_list:
def getFullMeaning(self):
    dictionary = PyDictionary(self.word)
    meaning = dictionary.getMeanings()[self.word]
    return meaning
from PyDictionary import PyDictionary

d = PyDictionary()


def meaning(word):
    answer = ''
    try:
        m = d.meaning(word)
        for k, v in m.items():  # .iteritems() is Python 2 only
            answer += '[b]' + k + ':[/b] '
            for x in v:
                answer += x + '\n'
    except Exception as e:
        answer = 'no such word'
    return answer
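The loop above flattens PyDictionary's `{part_of_speech: [definitions]}` mapping into `[b]`-tagged text. The same transformation as a standalone helper, fed a hand-written sample dict (illustrative, not real PyDictionary output) so it can be exercised without the library:

```python
def format_meanings(meanings):
    """Render a {part_of_speech: [definitions]} dict as [b]-tagged lines."""
    lines = []
    for pos, defs in meanings.items():
        lines.append(f"[b]{pos}:[/b] " + "\n".join(defs))
    return "\n".join(lines)


sample = {"Noun": ["a small domesticated carnivore"], "Verb": ["to whip"]}
print(format_meanings(sample))
```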
def processRequest(req):
    # wolfram alpha
    if req.get("result").get("action") == "fact":
        client = wolframalpha.Client("4393W5-W6E838H957")
        john = client.query(req.get("result").get("resolvedQuery"))
        answer = next(john.results).text
        return {
            "speech": answer,
            "displayText": answer,
            "source": "From wolfram_alpha"
        }

    # translator: uses the Microsoft Translator API -- use your own key here
    elif req.get("result").get("action") == "tran":
        translator = Translator(
            '''jkthaha''',
            '''syosNIlEOJnlLByQGcMS+AIin0iaNERaQVltQvJS6Jg=''')
        try:
            s = translator.translate(
                req.get("result").get("parameters").get("question"),
                req.get("result").get("parameters").get("language"))
            return makeWebhookResult(s)
        except:
            return makeWebhookResult("Server busy, please try again later")

    # news: takes news randomly from different sources; see the newsapi docs for more info
    elif req.get("result").get("action") == "news":
        sources = [
            ("bbc-news", "top", "The headlines are: "),
            ("the-times-of-india", "latest", "The headlines are: "),
            ("independent", "top", "The headlines are: "),
            ("bbc-sport", "top", "The headlines from bbc sports: "),
            ("ars-technica", "latest", "The headlines are: "),
            ("the-hindu", "latest", "The headlines are: "),
        ]
        source, sort_by, prefix = random.choice(sources)
        r = requests.get(
            'https://newsapi.org/v1/articles?source=' + source +
            '&sortBy=' + sort_by +
            '&apiKey=1412588264c447da83a7c75f1749d6e8')
        x = r.json().get('articles')
        headlines = " ".join(
            "{}. {}.".format(i + 1, x[i]["title"]) for i in range(5))
        return makeWebhookResult(prefix + headlines)

    # wikipedia
    elif req.get("result").get("action") == "wiki":
        param = req.get("result").get("parameters").get("any")
        fin = wikipedia.summary(param, sentences=2)
        return makeWebhookResult(fin)

    # local time
    elif req.get("result").get("action") == "time":
        app_id = "4393W5-W6E838H957"
        client = wolframalpha.Client(app_id)
        john = client.query("time in bangalore")
        answer = next(john.results).text
        return makeWebhookResult(answer)

    # weather (Yahoo API)
    elif req.get("result").get("action") == "yahooWeatherForecast":
        baseurl = "https://query.yahooapis.com/v1/public/yql?"
        yql_query = makeYqlQuery(req)
        if yql_query is None:
            return {}
        # Note: urllib.urlencode/urlopen are Python 2; on Python 3 use
        # urllib.parse.urlencode and urllib.request.urlopen.
        yql_url = baseurl + urllib.urlencode({'q': yql_query}) + "&format=json"
        result = urllib.urlopen(yql_url).read()
        data = json.loads(result)
        return makeWebhookResult1(data)

    # dictionary
    else:
        dictionary = PyDictionary()
        ch = req.get('result').get('parameters').get('word')
        test = req.get('result').get('parameters').get('dictionary')
        if test == 'antonym':
            res = dictionary.antonym(ch)
            try:
                # list up to the first five antonyms
                answer = ("Antonym for the word " + ch + " are: "
                          + ", ".join(res[:5]) + ".")
            except:
                answer = "There is no antonym for this word"
            return makeWebhookResult(answer)
        elif test == 'definition':
            re1s = dictionary.meaning(ch)
            try:
                try:
                    answer = "The word {0} is a verb and its meaning is {1}".format(ch, re1s['Verb'])
                except:
                    try:
                        answer = "The word {0} is a noun and its meaning is {1}".format(ch, re1s['Noun'])
                    except:
                        answer = "The word {0} is an adjective and its meaning is {1}".format(ch, re1s['Adjective'])
            except:
                answer = re1s
            return makeWebhookResult(answer)
        elif test == 'synonym':
            res = dictionary.synonym(ch)
            try:
                # list up to the first five synonyms
                answer = ("Synonym for the word " + ch + " are: "
                          + ", ".join(res[:5]) + ".")
            except:
                answer = "There is no Synonym for this word"
            return makeWebhookResult(answer)
# wikipedia search library
import wikipedia
from googleapi import google
# covid19 data providing library
import COVID19Py
# text to speech library
import pyttsx3
from PyDictionary import PyDictionary

# Obtain audio from the microphone
recognizer = speech_recognition.Recognizer()
# text to speech setup
engine = pyttsx3.init()
voices = engine.getProperty('voices')
dictionary = PyDictionary()
engine.setProperty("rate", 150)
engine.setProperty("volume", 1)
engine.setProperty("voice", voices[1].id)

print("What do you want to do?")
print("A.Type")
print("B.Speak(beta version)")
option = input("Type here\n")


# check whether to use voice recognition
def choice_of_voice_recognition(option):
    if option.lower() == "b":
        print("Ask")
        sentence = voice_recognition()
        print(sentence)
#!/usr/bin/env python
from SearchResult import SearchResult
from PyDictionary import PyDictionary

thesaurus = PyDictionary()
terms = ['red', 'blue', 'yellow']

searchResults = []
for term in terms:
    searchResults.append(SearchResult(term))

book = open('sense-and-sensibility.txt')
for line in book:
    for term in searchResults:
        if term.term in line:
            term.increment()
        for synonym in term.synonyms:
            if synonym in line:
                term.increment(synonym)

for term in searchResults:
    print('There are {} occurrences of synonyms of {} (using {} as synonyms)'.format(
        term.count, term.term, term.synonyms))
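The counting loop above can be sketched as a standalone helper: `SearchResult` is replaced here by a plain `Counter` over an explicit synonym list, keeping the same substring test per line (the sample text and synonyms are illustrative):

```python
from collections import Counter


def count_occurrences(lines, term, synonyms):
    """Count lines containing `term` or any of its synonyms,
    mirroring the `if term in line` substring test above."""
    counts = Counter()
    for line in lines:
        if term in line:
            counts[term] += 1
        for syn in synonyms:
            if syn in line:
                counts[syn] += 1
    return counts


text = ["the red fox", "a crimson sky", "plain prose"]
print(count_occurrences(text, "red", ["crimson", "scarlet"]))
```

In the script above, the synonym list would come from a thesaurus lookup (e.g. `PyDictionary().synonym(term)`) rather than a hand-written list.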
from tkinter import *  # tkinter =)
from tkinter.font import Font  # import module to customize font
from PIL import ImageTk, Image  # import modules to import image
from PyDictionary import PyDictionary  # import the dictionary
import requests  # import this to request the webpage
from bs4 import BeautifulSoup  # to get data from the requested webpage

global dic  # globalize the dictionary data
dic = PyDictionary()  # get the dictionary data


def init():  # initialize the app
    # setup window
    global root  # globalize the root
    root = Tk()  # create main window
    root.config(bg='white')  # set the bg color to white
    root.title('Python Dictionary GUI')  # give the window a title
    root.geometry('500x350')  # resize the window
    root.iconbitmap('icon.ico')  # give the window a nice icon
    root.resizable(False, False)  # make the window unresizable
    # setup fonts
    global titlefont  # globalize the title font style
    global inputfont  # globalize the input font style
    global buttonfont  # globalize the button font style
    global nafont  # globalize the 'not available' label font
    global meaningfont  # globalize the meaning box font
    global tabfont  # globalize the font for the tab menu bar
    titlefont = Font(size=30, family='Bahnschrift')  # setup title style
    nafont = Font(size=20, family='Bahnschrift Light')
    def webVersion(self):  # for website only
        dictionary = PyDictionary()
        f = open('helloworld.html', 'w')
        f.write("""<!DOCTYPE html>
<html>
<head lang="en">
    <title>Word Cloud Chart</title>
    <style>
        .word{
            position: relative;
            display: inline-block;
            border-bottom: 1px dotted black;
        }
        .word .tooltiptext {
            visibility: hidden;
            width: 300px;
            background-color: black;
            color: #fff;
            text-align: center;
            border-radius: 6px;
            padding: 5px 0;
            font-size: 10px;
            /* Position the tooltip */
            position: absolute;
            z-index: 1;
        }
        .word:hover .tooltiptext {
            visibility: visible;
        }
        html, body, #container {
            text-align: center;
            vertical-align: middle;
            font-family: arial;
            background-color:black;
            width: 100%;
            border:1px solid black;
            height: 100%;
            margin: 0;
            padding: 0;
        }
        html, body, #container, #rest{ font-size: 20px; float:left; }
        html, body, #container, #small{ margin : 3px; font-size: 40px; float:left; }
        html, body, #container, #medium{ margin : 5px; font-size: 60px; float:left; }
        html, body, #container, #big{ margin : 7px; font-size: 80px; float:left; }
        html, body, #container, #huge{ margin : 10px; font-size: 100px; float:left; }
    </style>
</head>
<body>
<div id="container" >
""")
        # Map each word's frequency to a size class and emit one tooltip div per word.
        for word in self.words[:50]:
            colour_code = str(int(random.random() * 255))
            colour_code_1 = str(int(random.random() * 255))
            colour_code_2 = str(int(random.random() * 255))
            if word[1] == 2:
                size_id = "small"
            elif word[1] >= 5:
                size_id = "huge"
            elif word[1] >= 4:
                size_id = "big"
            elif word[1] >= 3:
                size_id = "medium"
            else:
                size_id = "rest"
            f.write('<div class="word" style="color:rgb(' + colour_code + ','
                    + colour_code_1 + ',' + colour_code_2 + ');" id="' + size_id
                    + '"><span class="tooltiptext">Frequency:' + str(word[1])
                    + '<p>Meaning:' + str(dictionary.meaning(word[0]))
                    + '</p></span>' + word[0] + '</div>')
        f.write("""
</div>
<script>
</script>
</body>
</html>""")
        f.close()
        webbrowser.open_new_tab('helloworld.html')
        # done for website