class WindowHelper:
    def __init__(self, plugin, window):
        self._window = window
        self._plugin = plugin
        self._entry = None

        self._accel = gtk.AccelGroup()
        self._accel.connect_group(gtk.keysyms.C,
                                  gtk.gdk.SUPER_MASK,
                                  0,
                                  self._do_command)
        self._window.add_accel_group(self._accel)

    def deactivate(self):
        self._window.remove_accel_group(self._accel)
        self._window = None
        self._plugin = None

    def update_ui(self):
        pass

    def _do_command(self, group, obj, keyval, mod):
        view = self._window.get_active_view()
        if not view:
            return False

        if not self._entry:
            self._entry = Entry(self._window.get_active_view())
            self._entry.connect('destroy', self.on_entry_destroy)

        self._entry.grab_focus()
        return True

    def on_entry_destroy(self, widget):
        self._entry = None
def test_simple_as_tsv():
    entry = Entry()
    entry.a = "A value"
    assert entry.tsv == (
        'a\n'
        'A value\n'
    )
def test_set_of_tags_from_primitive():
    entry = Entry()
    entry.primitive = {
        'fields': ['a', 'b', 'c'],
        'name': 'foo'}
    assert entry.name == 'foo'
    assert entry.fields == ['a', 'b', 'c']
def __init__(self, section, etype, node):
    Entry.__init__(self, section, etype, node)
    self.return_invalid_entry = fdt_util.GetBool(self._node,
                                                 'return-invalid-entry')
    self.return_unknown_contents = fdt_util.GetBool(self._node,
                                                    'return-unknown-contents')
    self.bad_update_contents = fdt_util.GetBool(self._node,
                                                'bad-update-contents')
    self.return_contents_once = fdt_util.GetBool(self._node,
                                                 'return-contents-once')

    # Set to True when the entry is ready to process the FDT.
    self.process_fdt_ready = False
    self.never_complete_process_fdt = fdt_util.GetBool(
        self._node, 'never-complete-process-fdt')
    self.require_args = fdt_util.GetBool(self._node, 'require-args')

    # This should be picked up by GetEntryArgsOrProps()
    self.test_existing_prop = 'existing'
    self.force_bad_datatype = fdt_util.GetBool(self._node,
                                               'force-bad-datatype')
    (self.test_str_fdt, self.test_str_arg, self.test_int_fdt,
     self.test_int_arg, existing) = self.GetEntryArgsOrProps([
        EntryArg('test-str-fdt', str),
        EntryArg('test-str-arg', str),
        EntryArg('test-int-fdt', int),
        EntryArg('test-int-arg', int),
        EntryArg('test-existing-prop', str)], self.require_args)
    if self.force_bad_datatype:
        self.GetEntryArgsOrProps([EntryArg('test-bad-datatype-arg', bool)])
    self.return_contents = True
def test_empty_entry_from_json_with_whitespace():
    entry = Entry()
    entry.json = ('{}')
    assert entry.json == ('{}')
    entry.json = (' { }')
    assert entry.json == ('{}')
def read_existing_entry(parameters):
    # some option to read last or read random
    entry_id, writeable_param = peel(parameters)
    writeable = bool(re.match('^w$', writeable_param))
    e = Entry(entry_id, writeable=writeable)
    e.edit_entry()
    return None
def __init__(self, section, etype, node):
    Entry.__init__(self, section, etype, node)
    self.text_label, = self.GetEntryArgsOrProps(
        [EntryArg('text-label', str)])
    self.value, = self.GetEntryArgsOrProps([EntryArg(self.text_label, str)])
    if not self.value:
        self.Raise("No value provided for text label '%s'" %
                   self.text_label)
def test_get_entry_json_by_hash():
    entry = Entry()
    entry.primitive = {"id": 123, "someField": "thevalue"}
    response = app.get('/hash/%s.json' % entry.hash, base_url=field_url)
    assert response.status_code == 200
    data = json.loads(response.data.decode('utf-8'))
    assert data == {"hash": entry.hash, "entry": entry.primitive}
def testBindToFunction(self):
    def thatWhichMustBeCalled(newValue):
        thatWhichMustBeCalled.called = True

    thatWhichMustBeCalled.called = False
    e = Entry()
    e.bindOnChange(thatWhichMustBeCalled)
    e.set('Hello World!')
    self.assertTrue(thatWhichMustBeCalled.called)
def parse(self, csvLines, csvHeader):
    self.entries = [Entry(csvLines.pop(0), csvHeader)]
    while len(csvLines) > 0:
        nextEntry = Entry(csvLines[0], csvHeader)
        if nextEntry.getType() == "Root":
            break
        self.entries.append(nextEntry)
        csvLines.pop(0)
    self.postValidate()
def test_set_of_tags_from_txt():
    entry = Entry()
    entry.txt = ('fields:\n'
                 ' - b\n'
                 ' - c\n'
                 ' - z\n'
                 'name: foo\n')
    assert entry.name == "foo"
    assert entry.fields == ['b', 'c', 'z']
def test_get_entry_yaml_by_hash():
    entry = Entry()
    entry.primitive = {'field': 'value'}
    register.put(entry)
    response = app.get('/hash/%s.yaml' % entry.hash, base_url=field_url)
    assert response.status_code == 200
    data = response.data.decode('utf-8')
    assert data == "field: value\n"
def testDelegate(self):
    class Delegate(object):
        def onChange(self, newValue):
            self.called = True

    e = Entry()
    e.delegate = Delegate()
    e.delegate.called = False
    e.set('Hello World!')
    self.assertTrue(e.delegate.called)
def test_update_entry_correct_priority(self):
    ent = Entry('a', 'apple', 5)
    time.sleep(0.001)
    ent2 = Entry('b', 'cat', 3)
    self.lruc4._add_entry(ent)
    self.lruc4._add_entry(ent2)
    time.sleep(0.001)
    ent.touch()
    self.lruc4._update_entry(ent)
    smallest = heapq.heappop(self.lruc4._ordering)
    assert smallest == ent2
def testBindToMethod(self):
    class MethodHolder(object):
        def methodThatMustBeCalled(self, newValue):
            self.called = True

    e = Entry()
    methodHolder = MethodHolder()
    methodHolder.called = False
    e.bindOnChange(methodHolder.methodThatMustBeCalled)
    e.set('Hello World!')
    self.assertTrue(methodHolder.called)
def test_postaladdress_from_tsv():
    entry = Entry()
    entry.tsv = (
        'addressCountry\taddressLocality\taddressRegion\tpostcode\tstreetAddress\n'  # NOQA
        'GB\tHolborn\tLondon\tWC2B 6NH\tAviation House, 125 Kingsway\n'  # NOQA
    )
    assert entry.streetAddress == "Aviation House, 125 Kingsway"
    assert entry.addressLocality == "Holborn"
    assert entry.addressRegion == "London"
    assert entry.postcode == "WC2B 6NH"
    assert entry.addressCountry == "GB"
def test_trim_csv_headings():
    entry = Entry()
    entry.csv = (
        '" addressCountry","addressLocality "," addressRegion","postcode"," streetAddress"\r\n'  # NOQA
        '"GB","Holborn","London","WC2B 6NH","Aviation House, 125 Kingsway"\r\n'  # NOQA
    )
    assert entry.streetAddress == "Aviation House, 125 Kingsway"
    assert entry.addressLocality == "Holborn"
    assert entry.addressRegion == "London"
    assert entry.postcode == "WC2B 6NH"
    assert entry.addressCountry == "GB"
def test_postaladdress_from_json():
    entry = Entry()
    entry.json = ('{"addressCountry":"GB",'
                  '"addressLocality":"Holborn",'
                  '"addressRegion":"London",'
                  '"postcode":"WC2B 6NH",'
                  '"streetAddress":"Aviation House, 125 Kingsway"}')
    assert entry.streetAddress == "Aviation House, 125 Kingsway"
    assert entry.addressLocality == "Holborn"
    assert entry.addressRegion == "London"
    assert entry.postcode == "WC2B 6NH"
    assert entry.addressCountry == "GB"
def create_new_entry(parameters):
    title = parameters
    e = Entry(title=title)
    e.edit_entry()
    if title == "":
        print "\nEnter title:"
        e.update_title(raw_input("#: >> "))
        print "\nTitle set.\n"
    return None
def __init__(self, section, etype, node):
    Entry.__init__(self, section, etype, node)
    self.content = fdt_util.GetPhandleList(self._node, 'content')
    if not self.content:
        self.Raise("Vblock must have a 'content' property")
    (self.keydir, self.keyblock, self.signprivate, self.version,
     self.kernelkey, self.preamble_flags) = self.GetEntryArgsOrProps([
        EntryArg('keydir', str),
        EntryArg('keyblock', str),
        EntryArg('signprivate', str),
        EntryArg('version', int),
        EntryArg('kernelkey', str),
        EntryArg('preamble-flags', int)])
def test_postaladdress_from_txt():
    entry = Entry()
    entry.txt = ('addressCountry: GB\n'
                 'addressLocality: Holborn\n'
                 'addressRegion: London\n'
                 'postcode: WC2B 6NH\n'
                 'streetAddress: Aviation House, 125 Kingsway\n')
    assert entry.streetAddress == "Aviation House, 125 Kingsway"
    assert entry.addressLocality == "Holborn"
    assert entry.addressRegion == "London"
    assert entry.postcode == "WC2B 6NH"
    assert entry.addressCountry == "GB"
def setup():
    collections = db.collection_names()
    if 'testing' not in collections:
        db.create_collection('testing')
    entry = Entry()
    entry.primitive = {"id": 123, "someField": "thevalue"}
    register.put(entry)
    entry.primitive = {"id": 678, "someField": "thevalue"}
    register.put(entry)
    entry.primitive = {"id": 234, "otherField": "another value"}
    register.put(entry)
def __init__(self, index, parent, initials=None, name=None, index_as=None,
             sort_as=None, meta=None, alias=None, **rest):
    for attribute in self._generated_fields:
        setattr(self, attribute, dict.fromkeys([Entry.INFLECTION_NONE, ]))

    Entry.__init__(self, index, parent, meta)

    # If this entry is an alias for another entry, indicate that
    # here.
    self.alias = alias

    names = escape_aware_split(name, ',')
    if len(names) > 1:
        last_name, first_name = names
    else:
        last_name, first_name = names[0], ''

    # Remove any preceding commas (',') and surrounding
    # whitespace.
    first_name = first_name.lstrip(',').strip()

    field_variable_map = [
        ('reference', 'reference'),
        ('typeset_in_text_first', 'first_name'),
        ('typeset_in_text_last', 'last_name'),
        ('typeset_in_index', 'typeset_formal'), ]

    # Prepare the different local variables.
    reference = initials
    if first_name:
        typeset_formal = '%s, %s' % (last_name, first_name)
    else:
        typeset_formal = last_name

    for particle in ['the', 'von', 'van', 'van der']:
        if first_name.endswith(particle):
            last_name = ' '.join([first_name[-len(particle):], last_name])
            first_name = first_name[:-len(particle)]
            break  # Will avoid the following else-clause.

    variables = locals()
    for field, variable in field_variable_map:
        getattr(self, field)[Entry.INFLECTION_NONE] = variables[variable]

    self.index_inflection = Entry.INFLECTION_NONE
def test_postaladdress_from_primitive():
    entry = Entry()
    entry.primitive = {
        'addressCountry': 'GB',
        'addressLocality': 'Holborn',
        'addressRegion': 'London',
        'postcode': 'WC2B 6NH',
        'streetAddress': 'Aviation House, 125 Kingsway'}
    assert entry.streetAddress == "Aviation House, 125 Kingsway"
    assert entry.addressLocality == "Holborn"
    assert entry.addressRegion == "London"
    assert entry.postcode == "WC2B 6NH"
    assert entry.addressCountry == "GB"
def __init__(self, section, etype, node):
    Entry.__init__(self, section, etype, node)
    self.hardware_id, self.keydir, self.bmpblk = self.GetEntryArgsOrProps(
        [EntryArg('hardware-id', str),
         EntryArg('keydir', str),
         EntryArg('bmpblk', str)])

    # Read in the GBB flags from the config
    self.gbb_flags = 0
    flags_node = node.FindNode('flags')
    if flags_node:
        for flag, value in gbb_flag_properties.iteritems():
            if fdt_util.GetBool(flags_node, flag):
                self.gbb_flags |= value
def testEqual(self):
    entry = Entry()
    entry.title = self.saveentry.title
    entry.updated = self.saveentry.updated
    entry.identity = self.saveentry.identity
    entry.content = self.saveentry.content
    entry.read = self.saveentry.read
    entry.url = self.saveentry.url
    entry.important = self.saveentry.important
    entry.author = self.saveentry.author
    self.assertEqual(self.saveentry, entry)
def do_add(args):
    """CLI func to call when doing subtask "add".
    """
    # Three kinds of cases here in a mess:
    # 1. movie name provided (possibly other info) (go batch)
    # 2. movie name not provided, IMDB provided (go batch)
    # 3. neither is provided, go interactive
    # TODO refactoring needed. Replace functions and so on.
    db = data.DataFacilities(dbfile=args.db_file)
    if args.movie:
        # 1. batch
        newflick = Entry(args.movie, args.rating, args.imdb,
                         args.message or "")
        newflick.imdb = imdbutils.just_work_dammit(newflick,
                                                   skip=args.skip_imdb)
    elif args.imdb:
        # 2. batch: No movie name given but something of an IMDB id
        if args.skip_imdb:
            print("Can't do this without querying IMDB!")
            return
        try:
            moviename = imdbutils.query_imdb_name(args.imdb)
            clean_id = imdbutils.clean_imdb_id(args.imdb)
        except imdbutils.BadIMDBIdException:
            print("Malformed IMDB id, aborting.")
            return
        except imdbutils.NoIMDBpyException:
            print("Can't do this without querying IMDB! (No IMDBpy)")
            return
        newflick = Entry(moviename, imdb=clean_id, rating=args.rating,
                         message=args.message or "")
    else:
        # 3. interactive
        try:
            newflick = edit_entry.edit_data_interactive(
                None, skip_imdb=args.skip_imdb)
            if not args.skip_imdb:
                from textwrap import fill
                #triv = imdbutils.query_random_trivia(newflick.imdb)
                #if triv: print 'TRIVIA:', fill(triv)
        except edit_entry.UserCancel:
            print("Empty name, exiting...")
            return 0

    if args.debug:
        print(newflick.__dict__)
    else:
        db.store_entry(newflick)
def create_entry_from_mail(self, mail_message):
    plaintext_bodies = mail_message.bodies('text/plain')
    html_bodies = mail_message.bodies('text/html')

    msg = None
    for content_type, body in itertools.chain(plaintext_bodies,
                                              html_bodies):
        msg = body.decode()
        if msg:
            break

    # Hash the message content, and hour so we won't have the same
    # message within the hour
    guid = hashlib.sha256(
        msg + datetime.now().strftime('%Y%m%d%H')).hexdigest()
    entry = Entry(title=mail_message.subject,
                  summary=msg,
                  guid=guid,
                  parent=self.key)
    yield entry.put_async()
    raise ndb.Return(entry)
def export(self, csvFile, depth=0):
    """Export this event to a csv file as part of the export procedure. \
csvFile must be a file opened with w permissions. <depth> empty columns \
are added to the beginning to serve as indentation"""
    leadingCommas = ''
    for _ in range(depth):
        leadingCommas = leadingCommas + ','

    s = '{indent}{number},"{name}","Total Time: {time}"\n'.format(
        indent=leadingCommas,
        number=self.classNumber,
        name=self.className,
        time=self.totalTime
    )
    csvFile.write(s)

    s = '{indent},{header}\n'.format(
        indent=leadingCommas,
        header=Entry.getCsvHeader()
    )
    csvFile.write(s)

    for e in self.entries:
        e.export(csvFile, depth + 1)
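The same indented-CSV layout can be exercised on its own. This is a minimal runnable sketch of the header-writing step above; the `date,duration` header string is a stand-in assumption for whatever `Entry.getCsvHeader()` actually returns.

```python
import io

def export_header(csv_file, depth, number, name, total_time):
    # <depth> leading commas produce <depth> empty columns, i.e. indentation.
    indent = ',' * depth
    csv_file.write('{indent}{number},"{name}","Total Time: {time}"\n'.format(
        indent=indent, number=number, name=name, time=total_time))
    # Stub header in place of Entry.getCsvHeader()
    csv_file.write('{indent},{header}\n'.format(
        indent=indent, header='date,duration'))

buf = io.StringIO()
export_header(buf, depth=2, number=101, name='CS Lecture', total_time='3:00')
```

Writing into a `StringIO` makes the column layout easy to inspect without touching the filesystem.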
def get(self):
    # get all the entries
    catalogue_name = self.request.get('catalogue_name',
                                      datastore.DEFAULT_CATALOGUE_NAME)
    catalogue_query = Entry.query(
        ancestor=datastore.create_entry_key(catalogue_name)).order(
        -Entry.date)
    entries = catalogue_query.fetch()

    # wrap feed around the entries
    xml = '''\
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
'''
    for entry in entries:
        xml = xml + entry.mytoXML(self.request.scheme + '://' +
                                  self.request.host)
    xml = xml + '</feed>'

    # create etree xml doc
    #f = StringIO(xml)
    doc = etree.XML(xml)

    # create transformer from xsl
    xsl_file = './styles/bloglist.xsl'
    xsl_root = etree.parse(xsl_file)
    transformer = etree.XSLT(xsl_root)
    result_tree = transformer(doc)
    self.response.write(str(result_tree))
def set_labs(self, bundle, prefix=None):
    """Attaches the patient's lab results to the bundle"""
    if GENERATION_MAP["LabResults"] and self.pid in Lab.results:
        for o in Lab.results[self.pid]:
            pid = self.pid
            # if prefix:
            #     pid = prefix + "-" + pid
            _json = o.toJSON()
            _json["id"] = uid(None, "%s-lab" % o.id, prefix)
            _json["pid"] = pid
            _json["categoryCode"] = "laboratory"
            _json["categoryDisplay"] = "Laboratory"
            # print _json
            self.appendEntry(bundle, Entry(Observation(_json, prefix)))
    return bundle
def range_of_dates(self, start_date, end_date):
    """Finds entry from a ranged date search and returns a list of
    matching objects."""
    start_date = datetime.strptime(start_date, "%m/%d/%Y")
    end_date = datetime.strptime(end_date, "%m/%d/%Y")
    entries_found = []
    for data in self.entries:
        data_date = datetime.strptime(data['Date'], "%m/%d/%Y")
        if start_date <= data_date <= end_date:
            entries_found.append(Entry.from_dict(data))
    return entries_found
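The date-range filter above can be demonstrated standalone. In this sketch the matched dict is returned directly in place of `Entry.from_dict(data)`, so the example has no dependencies beyond the standard library.

```python
from datetime import datetime

def range_of_dates(entries, start_date, end_date):
    # Parse the boundary strings once, then compare each row's 'Date' field.
    start = datetime.strptime(start_date, "%m/%d/%Y")
    end = datetime.strptime(end_date, "%m/%d/%Y")
    found = []
    for data in entries:
        when = datetime.strptime(data['Date'], "%m/%d/%Y")
        if start <= when <= end:
            found.append(data)
    return found

rows = [{'Date': '01/05/2020'},
        {'Date': '03/10/2020'},
        {'Date': '12/25/2020'}]
matches = range_of_dates(rows, '01/01/2020', '06/30/2020')
```

Because both boundaries are inclusive (`start <= when <= end`), an entry dated exactly on either boundary is returned.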
def parseEntry(debe, nextDebe, printer):
    bs = fetch(BASE_URL + debe.link)
    if bs is not None:
        try:
            entry = Entry()
            entry.id = debe.id
            entry.nextId = nextDebe.id
            entry.title = debe.title
            entry.link = BASE_URL + debe.link
            entry.date = bs.find("a", {"class": "entry-date"}).text
            entry.author = bs.find("a", {"class": "entry-author"}).text
            entry.authorLink = BIRI + "/" + entry.author.replace(" ", "-")
            entry.content = bs.find("div", {
                "class": "content"
            }).decode_contents().replace(
                'href="/?q=', 'href="' + BASE_URL + '/?q=').replace(
                'href="/entry', 'href="' + BASE_URL + '/entry')
            return printer.printEntry(entry)
        except Exception as e:
            print(e)
def get_entries(contents: str) -> Generator:
    for match in bib_entry.finditer(contents):
        # start and end of match
        start, end = match.start(), match.end()
        num_parens = 1
        index = end + 1
        while num_parens > 0:
            char = contents[index]
            if char == '{':
                num_parens += 1
            elif char == '}':
                num_parens -= 1
            index += 1
        yield Entry(start, index, contents[start:index])
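The brace-matching scan above can be tried in isolation. This self-contained sketch assumes a simplified `bib_entry` pattern (`@\w+\{`, which consumes the opening brace, so the depth counter starts at the match end) and a plain namedtuple in place of the real `Entry`.

```python
import re
from collections import namedtuple

Entry = namedtuple('Entry', ['start', 'end', 'text'])
bib_entry = re.compile(r'@\w+\{')  # assumed pattern; consumes the opening '{'

def get_entries(contents):
    for match in bib_entry.finditer(contents):
        start, end = match.start(), match.end()
        depth = 1          # the '{' already consumed by the regex
        index = end
        # Walk forward until the matching closing brace balances the count.
        while depth > 0:
            char = contents[index]
            if char == '{':
                depth += 1
            elif char == '}':
                depth -= 1
            index += 1
        yield Entry(start, index, contents[start:index])

sample = '@article{key, title={A {nested} title}}'
entries = list(get_entries(sample))
```

Counting depth rather than searching for the first `}` is what lets nested braces (common in BibTeX titles) survive intact.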
def load_entries(self):
    """load entries if they exist from csv file"""
    if os.path.exists("./entries.csv"):
        with open("entries.csv", "r") as csv_file:
            entry_reader = csv.reader(csv_file, delimiter=',')
            for row in entry_reader:
                if row[0] == "task_name":
                    continue
                else:
                    self.e = Entry(row[0], row[1], row[2], row[3])
                    self.entries.append(self.e)
    else:
        # just continue the program
        print("We found no history of any csv file. "
              "We will create a new one.")
def populate_from_spreadsheet(self, input_file_name):
    """
    Grab voter data from the given spreadsheet (prepared by
    create_spreadsheet_from_voter_dictionary in create-voter-spreadsheet.py)
    and populate the Contest with the relevant Voters and Entries.
    """
    if self.verbose:
        print(f"Populating contest with voter data from {input_file_name}...",
              end="", flush=True)

    with open(input_file_name, "r", newline="") as spreadsheet:
        reader = csv.reader(spreadsheet, delimiter=",")
        header = next(reader)
        entry_names = header[1:]

        # If there are n Entries, each voter can assign at most n distinct
        # rankings. (Really the number of Entries may be more than the number
        # of distinct rankings. For example, a poll could ask users to vote
        # for the top 3 Entries out of 10. It's fine to overestimate the
        # number of distinct rankings though. It'll just lead to some wasted
        # space in each Voter, which doesn't really matter.)
        num_distinct_rankings = len(entry_names)

        # construct Entries
        for entry_name in entry_names:
            # self.entries[i] contains the entry from column i+1
            # (not column i because the leftmost column contains user info,
            # not entry info)
            self.entries.append(Entry(entry_name))

        # construct Voters and record their votes
        for row in reader:
            voter_name = row[0]
            # voter_rankings[i] contains the voter's ranking for entry
            # self.entries[i]
            voter_rankings = row[1:]
            voter = Voter(voter_name, num_distinct_rankings)
            for i, ranking in enumerate(voter_rankings):
                if ranking:
                    # the ranks are stored in user_rankings as a list of
                    # strings, so cast them to ints for use as indexes
                    voter.rank(self.entries[i], int(ranking))
            self.voters.append(voter)

    if self.verbose:
        print(" done.")
def test_initialize_possible_values(self):
    # Given 9 Entries with starting values
    entries = [
        Entry(value=1, row=0, column=0),
        Entry(value=2, row=0, column=1),
        Entry(value=3, row=0, column=2),
        Entry(value=0, row=0, column=3),
        Entry(value=0, row=0, column=4),
        Entry(value=0, row=0, column=5),
        Entry(value=7, row=0, column=6),
        Entry(value=8, row=0, column=7),
        Entry(value=9, row=0, column=8),
    ]
    group = Group(GroupType.Row, group_number=0)
    group.entries = entries
    group.initialize_possible_values()
    for entry in group.entries:
        if entry.value:
            self.assertEqual(entry.possible_values[GroupType.Row], [])
        else:
            self.assertEqual(entry.possible_values[GroupType.Row],
                             [4, 5, 6])
def load_leaf_data(workspace):
    file = open(workspace + leaf_data_path, "r")
    text = file.read()
    temp_lines = text.splitlines()
    entries = []
    for line in temp_lines:
        line = line.strip(' ')
        tokens = line.split()
        if len(tokens) > 1:
            c = tokens[0]
            variables = tokens[1:len(tokens)]
            e = Entry(variables, c)
            entries.append(e)
    file.close()
    return entries
def get_processed_clean_entries(self, file_name=""):
    entries = []
    c = self.conn.cursor()
    if file_name != "":
        c.execute('''SELECT file_name, patient_index, index_in_array, cell_type
                     FROM cells
                     WHERE file_name=?
                       AND (cell_type=? OR cell_type=? OR cell_type=?
                            OR cell_type=? OR cell_type=? OR cell_type=?
                            OR cell_type=? OR cell_type=?)''',
                  (file_name, constants.NEUTROPHIL, constants.LYMPHOCYTE,
                   constants.MONOCYTE, constants.EOSINOPHIL,
                   constants.BASOPHIL, constants.STRANGE_EOSINOPHIL,
                   constants.STRANGE_EOSINOPHIL, constants.NO_CELL))
    else:
        c.execute('''SELECT file_name, patient_index, index_in_array, cell_type
                     FROM cells
                     WHERE (cell_type=? OR cell_type=? OR cell_type=?
                            OR cell_type=? OR cell_type=? OR cell_type=?
                            OR cell_type=? OR cell_type=?)''',
                  (constants.NEUTROPHIL, constants.LYMPHOCYTE,
                   constants.MONOCYTE, constants.EOSINOPHIL,
                   constants.BASOPHIL, constants.STRANGE_EOSINOPHIL,
                   constants.STRANGE_EOSINOPHIL, constants.NO_CELL))
    rows = c.fetchall()
    for row in rows:
        entries.append(Entry(row[0], row[1], row[2], row[3], 0, 0, 0, 1, 0))
    return entries
def time_search_matches(entries):
    times = []
    for entry in entries:
        time = entry.minutes
        if time not in times:
            times.append(time)
    if len(times) > 1:
        while True:
            clear()
            print("These times match your search")
            for time in times:
                print(time)
            minutes = input(
                "\nWhich time would you like to search?").strip()
            if minutes in times:
                entries = Entry.select().order_by(
                    Entry.minutes.desc()).where(Entry.minutes == minutes)
                return entries
def keyword_matches(entries):
    keywords = []
    for entry in entries:
        keyword = entry.notes.strip()
        if keyword not in keywords:
            keywords.append(keyword)
    if len(keywords) > 1:
        while True:
            clear()
            print("These entries match your keywords")
            for keyword in keywords:
                print(keyword)
            notes = input(
                "\nWhich keyword would you like to search").strip()
            if notes in keywords:
                entries = Entry.select().order_by(
                    Entry.date.desc()).where(Entry.notes == notes)
                return entries
def view_entries():
    """View previous entries"""
    if Entry.select():
        menu_range = (1, len(SEARCH_MENU) + 1)
        quit_num = menu_range[1]
        user_input = None
        while user_input != quit_num:
            print_title(view_entries.__doc__)
            print_options(SEARCH_MENU, docstring=True)
            print(f"\n{quit_num}) Return to Main Menu")
            user_input = validate(get_input(), int, menu_range)
            if user_input in SEARCH_MENU:
                search_results = SEARCH_MENU[user_input]()
                return print_entries(search_results)
    else:
        return print_error("No previous entries. Please add an entry first.")
def SetImagePos(self, image_pos):
    """Override this function to set all the entry properties from CBFS

    We can only do this once image_pos is known

    Args:
        image_pos: Position of this entry in the image
    """
    Entry.SetImagePos(self, image_pos)

    # Now update the entries with info from the CBFS entries
    for entry in self._cbfs_entries.values():
        cfile = entry._cbfs_file
        entry.size = cfile.data_len
        entry.offset = cfile.calced_cbfs_offset
        entry.image_pos = self.image_pos + entry.offset
        if entry._cbfs_compress:
            entry.uncomp_size = cfile.memlen
def __init__(self, entries: List[List[int]]):
    self.solved = False
    self.rows: List[Group] = []
    self.columns: List[Group] = []
    self.squares: List[Group] = []
    for iter in range(0, 9):
        self.rows.append(Group(GroupType.Row, iter))
        self.columns.append(Group(GroupType.Column, iter))
        self.squares.append(Group(GroupType.Square, iter))

    self.entries: List[List[Entry]] = []
    # First convert all the entries to objects
    for row_number, row in enumerate(entries):
        row_entries = []
        for column_number, entry_value in enumerate(row):
            row_entries.append(
                Entry(entry_value, row_number, column_number))
        self.entries.append(row_entries)

    self._create_groups()
def lru_cache_file(self, fname, size):
    if fname in self.status:
        self.lru_touch_file(fname)
        return None, None, status.LRU_UPDATED

    if self.unpinned_space < size:
        # we cannot evict from this
        return None, None, status.NO_SPACE_LEFT

    # free up some space
    evicted = None
    if self.free_space < size:
        # evict some to free up some space
        evicted = self.lru_evict(size - self.free_space)

    e = Entry(name=fname, size=size)
    self.status[fname] = e
    self.addto_used_space(size)
    self.lru_set_head(e)
    return e, evicted, status.SUCCESS
def test_loadUnload(self):
    # Test unloading our default plugin
    self.vfs.unloadDataSource("TestPlugin")
    self.assertEquals(0, len(self.vfs.dataSources))

    # Test loading a random object
    with self.assertRaises(ValueError):
        self.vfs.loadDataSource("I'm not a plugin!")

    # Test loading our old plugin
    self.vfs.loadDataSource(self.plugin)
    self.assertEquals(1, len(self.vfs.dataSources))
    self.assertEquals(self.plugin, self.vfs.dataSources["TestPlugin"])

    # Load one more plugin
    p = testPlugin()
    p.name = "TotallyDifferentPlugin"
    p.tree = Entry("/" + p.name, "object.container.storageFolder",
                   OneServerManager().rootEntry, [], p.name, "", -1, None)
    self.vfs.loadDataSource(p)
    self.assertEquals(2, len(self.vfs.dataSources))
    self.assertEquals(p, self.vfs.dataSources["TotallyDifferentPlugin"])
def find_by_date_range():
    """Find by date range"""
    beg_date = None
    end_date = None
    while not beg_date:
        print_title(find_by_date_range.__doc__)
        print("Begin date (mm/dd/yyyy):")
        beg_date = validate(get_input(), datetime)
    while not end_date:
        print_title(find_by_date_range.__doc__)
        print("End date (mm/dd/yyyy):")
        end_date = validate(get_input(), datetime)
    results = Entry.select().where(
        Entry.date.between(beg_date, end_date))
    return results
def load_iris_data(workspace):
    file = open(workspace + iris_data_path, "r")
    text = file.read()
    temp_lines = text.splitlines()
    entries = []
    for line in temp_lines:
        line = line.strip(' ')
        tokens = line.split()
        if len(tokens) > 1:
            c = tokens[-1]
            variables = tokens[0:len(tokens) - 1]
            e = Entry(variables, c)
            entries.append(e)
    file.close()
    print("files loaded.\n")
    return entries
def find_by_date():
    """Find by date of entry"""
    entries = Entry.select()
    dates = set([entry.timestamp for entry in entries])
    clear_screen()
    print("Dates to choose from:")
    for date in dates:
        print(date.strftime('%m/%d/%Y'))
    print("Enter a date (MM/DD/YYYY)")
    print("Enter q to go back")
    date = get_date()
    if date is None:
        return
    matched_entries = entries.where(Entry.timestamp.year == date.year,
                                    Entry.timestamp.month == date.month,
                                    Entry.timestamp.day == date.day)
    browse_through(matched_entries)
    return matched_entries
def create():
    name = subdomain(request)
    register = registers.get(name)
    if not register:
        register = Register(name.capitalize(),
                            current_app.config['MONGO_URI'])
        registers[name] = register

    search = "http://register.openregister.org/search.json"
    url = "%s?field=register&value=%s" % (search, name)

    if request.method == 'GET':
        # fetch form fields from register register for register type - ouch
        # e.g. http://register.openregister.org/search.
        # json?field=register&value=court and use entry.fields
        # however i think best that we persist this type of
        # metadata with register on intialisation
        resp = requests.get(url)
        fields = resp.json()[0]['entry']['fields']
        return render_template('create.html', register=register,
                               fields=fields)
    else:
        try:
            entry = Entry()
            entry_dict = {}
            if form_post(request):
                for val in request.form:
                    entry_dict[val] = request.form[val]
                entry = Entry()
                entry.primitive = entry_dict
                register.put(entry)
                domain = current_app.config['REGISTER_DOMAIN']
                url = 'http://%s.%s/hash/%s' % (name, domain, entry.hash)
                return redirect(url)
            elif request.headers['Content-Type'] == 'application/json':
                entry = Entry()
                entry.primitive = request.get_json()['entry']
                register.put(entry)
                return 'OK', 201
        except Exception as ex:
            log_traceback(current_app.logger, ex)
            return 'Internal Server Error', 500
def __process_element(self, lines, pos):
    parsed_line = findall(self.__line_reader_regex, lines[pos])
    if len(parsed_line) != 1:
        raise SyntaxError("Bad GEDCOM syntax in line: '" + lines[pos] + "'")
    level, pointer, tag, value = parsed_line[0]
    entry = Entry(tag, pointer, value, [])
    level = int(level)
    pos += 1
    while self.__line_level(lines, pos) > level:
        pos, child_element = self.__process_element(lines, pos)
        entry.children.append(child_element)
    return (pos, entry)
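The recursive descent above (consume a line, then recurse while the next line's level is deeper) can be sketched as a self-contained module. The regex, the tiny `Entry` class, and the `line_level` helper are all assumptions standing in for the parser's private members; real GEDCOM lines are more varied than this pattern covers.

```python
import re

# Assumed line shape: "<level> [@pointer@] <tag> [value]"
LINE_RE = re.compile(r'^(\d+)\s+(@\w+@)?\s*(\w+)\s*(.*)$')

class Entry:
    def __init__(self, tag, pointer, value, children):
        self.tag = tag
        self.pointer = pointer
        self.value = value
        self.children = children

def line_level(lines, pos):
    # Sentinel -1 past the end stops the recursion at every depth.
    if pos >= len(lines):
        return -1
    return int(lines[pos].split()[0])

def process_element(lines, pos):
    m = LINE_RE.match(lines[pos])
    if not m:
        raise SyntaxError("Bad GEDCOM syntax in line: '" + lines[pos] + "'")
    level, pointer, tag, value = m.groups()
    entry = Entry(tag, pointer, value, [])
    level = int(level)
    pos += 1
    # Deeper-level lines belong to this entry's subtree.
    while line_level(lines, pos) > level:
        pos, child = process_element(lines, pos)
        entry.children.append(child)
    return pos, entry

lines = ["0 HEAD x", "1 SOUR demo", "1 DATE 2024"]
_, root = process_element(lines, 0)
```

Returning the advanced `pos` alongside each parsed entry is what lets the caller resume scanning exactly where the child subtree ended.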
def get_entry_by_id(self, entry_id):
    '''
    Get directory entry object (stream or storage) by its id
    '''
    if entry_id in self._directory:
        return self._directory[entry_id]

    sector_number = self.first_directory_sector_location
    current_entry = 0
    while sector_number != ENDOFCHAIN and \
            (current_entry + 1) * (self.sector_size / 128) <= entry_id:
        sector_number = self._get_next_fat_sector(sector_number)
        current_entry += 1

    sector_position = (sector_number + 1) << self.sector_shift
    sector_position += (entry_id - current_entry *
                        (self.sector_size / 128)) * 128
    self._directory[entry_id] = Entry(entry_id, self, sector_position)
    return self._directory[entry_id]
def add_entry():
    """Add an entry."""
    entry = Entry()
    while True:
        clear()
        print_entry(name=entry.name, time_spent=entry.time_spent,
                    notes=entry.notes, date=entry.date)
        choice = input('[S]ave, [D]elete or [E]dit entry? ').lower().strip()
        if choice == 's':
            save_entry(entry=entry)
            input('Entry saved! Press enter to go to the main menu.')
            break
        elif choice == 'e':
            edit_entry(entry=entry)
        elif choice == 'd':
            input('Entry deleted! Press enter to go to the main menu.')
            break
def getArticleList(self, listOfPostId):
    articleList = []
    dic = self.loadArticleData()
    for id in listOfPostId:
        for k, v in dic.iteritems():
            if id == int(str(k)):
                entry1 = Entry(k, v)
                articleList.append(entry1)
                # article = ""
                # for key, value in v.iteritems():
                #     keyStr = str(key)
                #     valueStr = str(value)
                #     if keyStr == 'title':
                #         article = article + valueStr + "NEWLINE"
                #     if keyStr == 'text':
                #         article = article + valueStr
                # articleList.append(article)
    return articleList
def get_all_entrys(self, language, where="TRUE"):
    self._query(
        'SELECT COUNT(*) FROM documents WHERE (source_id IN '
        '(SELECT id FROM sources_twitter)) AND (' + where +
        ') AND language=\'%s\' AND _relevance=\'-1\'' % (language))
    result = self.cursor.fetchone()
    entry_count = result[0]
    for offset in xrange(0, entry_count, self.ONE_ENTRY_LOOP):
        self._query(
            'SELECT id, guid, text FROM documents WHERE (source_id IN '
            '(SELECT id FROM sources_twitter)) AND (' + where +
            ') AND language=\'%s\' AND _relevance=\'-1\' '
            'LIMIT %d OFFSET %d'
            % (language, self.ONE_ENTRY_LOOP, offset))
        result = self.cursor.fetchall()
        for entry in result:
            yield Entry(id=entry[0], guid=entry[1], entry=entry[2],
                        language=language)
def getEntry(self, file):
    e = self.server.get(file)
    if not e:
        file = file + ".txt"
        path = os.path.join(top, file)
        entry = open(path, 'r')
        url = 'article/' + file.split(".")[0]
        title = entry.readline()
        blob = entry.read()
        ascii = "".join([Utils().to_ascii(x) for x in blob])
        blob = self.md.convert(ascii)
        date = Utils().getDate(os.stat(os.path.join(top, file))[ST_CTIME])
        entry.close()
        e = Entry(date, title, url, blob)
        self.server.set(file.split('.')[0], e, expiry)
    return e
def main():
    form = cgi.FieldStorage()
    name = form.getfirst('build')
    action = form.getfirst('action')
    if name:
        e = Entry.find(name)
        # TODO: show error message on nonsensical action
        if e is None:
            return_404(name)
        elif action is None:
            show_log(e)
        elif action == "kill":
            e.signal_kill()
            show_log(e)
        elif action == "metadata":
            show_meta(e)
        elif action == "mail":
            mail_to_editors(e)
            show_log(e)
        elif action == "tar":
            download_tar(e)
        elif action == "isabelle_log":
            isabelle_log(e)
        elif action == "state":
            print_state(e)
        elif action == "checks":
            print_checks(e)
        elif action == "check_afp":
            s = form.getfirst('status')
            e.signal_afp(s)
            redirect("303 Back to submission list",
                     "index?action=submissions")
    elif action == "submissions":
        print_list_of_submissions()
    else:
        form_data_super = collect_form_data(form)
        metadata = Metadata(collect_form_per_entry(form),
                            form_data_super['contact'],
                            form_data_super['comment'])
        archive = get_archive_file(form)
        if metadata.validate() and archive is not None:
            save_file(metadata, archive)
        else:
            print_html_form(metadata)
def search_by_time_spent():
    header = "Search By Time Spent"
    status_message = None
    while True:
        print_menu(header, status_message)
        user_choice = input('\n Enter a time spent in minutes: ')
        try:
            if int(user_choice) < 1:
                status_message = " You must enter a positive integer!"
                continue
        except ValueError:
            status_message = " Please enter a valid integer!"
            continue
        try:
            display_entry(
                Entry.select().where(Entry.time_spent == int(user_choice)))
        except IndexError:
            return " No matches found!"
        break