def testBytesPassword(self):
    db = pg8000.connect(**db_connect)

    # Create user
    username = '******'
    password = u('cha\uFF6Fs')
    cur = db.cursor()

    # Delete user if left over from previous run
    try:
        cur.execute("drop role " + username)
    except pg8000.ProgrammingError:
        cur.execute("rollback")
    cur.execute(
        "create user " + username + " with password '" + password + "';")
    cur.execute('commit;')
    db.close()

    data = db_connect.copy()
    data['user'] = username
    data['password'] = password.encode('utf8')
    data['database'] = 'pg8000_md5'
    if PY2:
        self.assertRaises(
            pg8000.ProgrammingError, pg8000.connect, **data)
    else:
        self.assertRaisesRegex(
            pg8000.ProgrammingError, '3D000', pg8000.connect, **data)

    db = pg8000.connect(**db_connect)
    cur = db.cursor()
    cur.execute("drop role " + username)
    cur.execute("commit;")
    db.close()
def get_connection(self, key, host, port, user, password, dbname,
                   use_cached=True):
    "Get and memoize connections to instances"
    if key in self.dbs and use_cached:
        return self.dbs[key]
    elif host != "" and user != "":
        try:
            service_check_tags = [
                "host:%s" % host,
                "port:%s" % port
            ]
            if dbname:
                service_check_tags.append("db:%s" % dbname)

            if host == 'localhost' and password == '':
                # Use ident method
                connection = pg.connect("user=%s dbname=%s" % (user, dbname))
            elif port != '':
                connection = pg.connect(host=host, port=port, user=user,
                                        password=password, database=dbname)
            else:
                connection = pg.connect(host=host, user=user,
                                        password=password, database=dbname)
            status = AgentCheck.OK
            self.service_check('postgres.can_connect', status,
                               tags=service_check_tags)
            self.log.info('pg status: %s' % status)
        except Exception as e:
            status = AgentCheck.CRITICAL
            self.service_check('postgres.can_connect', status,
                               tags=service_check_tags)
            self.log.info('pg status: %s' % status)
            raise
def testSocketMissing():
    conn_params = {
        'unix_sock': "/file-does-not-exist",
        'user': "******"}

    with pytest.raises(pg8000.InterfaceError):
        pg8000.connect(**conn_params)
def get_connection(self, key, host, port, user, password, dbname, ssl,
                   use_cached=True):
    "Get and memoize connections to instances"
    if key in self.dbs and use_cached:
        return self.dbs[key]
    elif host != "" and user != "":
        try:
            if host == 'localhost' and password == '':
                # Use ident method
                connection = pg.connect("user=%s dbname=%s" % (user, dbname))
            elif port != '':
                connection = pg.connect(host=host, port=port, user=user,
                                        password=password, database=dbname,
                                        ssl=ssl)
            else:
                connection = pg.connect(host=host, user=user,
                                        password=password, database=dbname,
                                        ssl=ssl)
        except Exception as e:
            message = u'Error establishing postgres connection: %s' % (str(e))
            service_check_tags = self._get_service_check_tags(host, port, dbname)
            self.service_check(self.SERVICE_CHECK_NAME, AgentCheck.CRITICAL,
                               tags=service_check_tags, message=message)
            raise
    else:
        if not host:
            raise CheckException("Please specify a Postgres host to connect to.")
        elif not user:
            raise CheckException("Please specify a user to connect to Postgres as.")

    self.dbs[key] = connection
    return connection
def testGss(db_kwargs):
    db_kwargs["database"] = "pg8000_gss"

    # Should raise an exception saying gss isn't supported
    with pytest.raises(
            pg8000.InterfaceError,
            match="Authentication method 7 not supported by pg8000."):
        pg8000.connect(**db_kwargs)
def testBrokenPipe(self):
    db1 = pg8000.connect(**db_connect)
    db2 = pg8000.connect(**db_connect)

    cur1 = db1.cursor()
    cur2 = db2.cursor()

    cur1.execute("select pg_backend_pid()")
    pid1 = cur1.fetchone()[0]

    cur2.execute("select pg_terminate_backend(%s)", (pid1,))
    self.assertRaises(pg8000.OperationalError, cur1.execute, "select 1")

    cur2.close()
    db2.close()
def postgres_probably_connect(name, username, host):
    """
    Connect to postgres or die trying.
    """
    import pg8000
    module = pg8000
    if host:
        con = pg8000.connect(host=host, user=username, database=name)
    else:
        con = pg8000.connect(
            username,
            unix_sock='/var/run/postgresql/.s.PGSQL.5432',
            database=name)
    return module, con
def import_json(json_file):
    if os.environ.get("LIFELOG_DB_USER") is None or \
            os.environ.get("LIFELOG_DB_PASS") is None or \
            os.environ.get("LIFELOG_DB_HOST") is None or \
            os.environ.get("LIFELOG_DB_NAME") is None:
        print("Environment variables are not set!")
        sys.exit(1)

    print("Connecting...")
    json_data = json.load(open(json_file))
    conn = pg8000.connect(user=os.environ.get("LIFELOG_DB_USER"),
                          password=os.environ.get("LIFELOG_DB_PASS"),
                          host=os.environ.get("LIFELOG_DB_HOST"),
                          database=os.environ.get("LIFELOG_DB_NAME"))
    cursor = conn.cursor()

    print("Rebuilding tables...")
    sql = open("database.sql", "r").read()
    cursor.execute(sql)
    conn.commit()

    print("Inserting data...")
    for item in json_data:
        row = [item["name"], item["data"]]
        cursor.execute('INSERT INTO images (name, data) VALUES (%s, %s)', row)
    conn.commit()
    cursor.close()
def db_connect():
    return pg8000.connect(
        host="host",
        user="******",
        password="******",
        database="database"
    )
def import_topics(topics_file, qrels_file):
    if os.environ.get("LIFELOG_DB_USER") is None or \
            os.environ.get("LIFELOG_DB_PASS") is None or \
            os.environ.get("LIFELOG_DB_HOST") is None or \
            os.environ.get("LIFELOG_DB_NAME") is None:
        print("Environment variables are not set!")
        sys.exit(1)

    topics = []
    topics_dom = et.parse(topics_file)
    for topic_el in topics_dom.iter('topic'):
        id, title, description, narrative = '', '', '', ''
        for node in topic_el:
            if node.tag == 'id':
                id = node.text
            elif node.tag == 'title':
                title = node.text
            elif node.tag == 'description':
                description = node.text
            elif node.tag == 'narrative':
                narrative = node.text
        topics.append(Topic(id, title, description, narrative))
    print(topics)

    with open(qrels_file, 'r') as f:
        qrels = f.read().split('\n')

    images = {}
    for line in qrels:
        topic_id, _, image_path, relevance = line.split(' ')
        if relevance == '1':
            image = image_path.split('/')[-1].replace('.jpg', '')
            images[image] = topic_id

    conn = pg8000.connect(user=os.environ.get("LIFELOG_DB_USER"),
                          password=os.environ.get("LIFELOG_DB_PASS"),
                          host=os.environ.get("LIFELOG_DB_HOST"),
                          database=os.environ.get("LIFELOG_DB_NAME"))
    cursor = conn.cursor()

    with progressbar.ProgressBar(max_value=len(topics)) as bar:
        i = 0
        for topic in topics:
            cursor.execute('INSERT INTO topics (topic_id, title, description, narrative) '
                           'VALUES (%s, %s, %s, %s)',
                           [topic.id, topic.title, topic.description, topic.narrative])
            i += 1
            bar.update(i)

    with progressbar.ProgressBar(max_value=len(images)) as bar:
        i = 0
        for image_id, topic_id in images.items():
            cursor.execute('INSERT INTO images_topics (image_id, topic_id) '
                           'VALUES (%s, %s)', [image_id, topic_id])
            i += 1
            bar.update(i)

    conn.commit()
    cursor.close()
def get_data(_type, _sort):
    conn = pg8000.connect(user="******", database="mondial")
    cursor = conn.cursor()

    sort_result = ''
    if _sort == 'name':
        sort_result = 'ORDER BY name'
    elif _sort in ('area', 'elevation'):
        sort_result = 'ORDER BY ' + _sort + ' DESC'

    if _type:
        sql = "SELECT name, elevation, area, type FROM lake WHERE type = %s " + sort_result
        cursor.execute(sql, [str(_type)])
    else:
        sql = "SELECT name, elevation, area, type FROM lake " + sort_result
        cursor.execute(sql)

    # nested function
    def get_int(value):
        if isinstance(value, decimal.Decimal):
            return int(value)
        else:
            return None

    query_result = []
    for row in cursor.fetchall():
        item = {
            "name": row[0],
            "elevation": get_int(row[1]),
            "area": get_int(row[2]),
            "type": row[3]
        }
        query_result.append(item)
    return query_result
def __init__(self, user, password, database, host):
    """Connect to database"""
    self.__db = pg8000.connect(user=user, password=password,
                               database=database, host=host)
    self.__cursor = self.__db.cursor()
def handler(pgsql):
    conn = pg8000.connect(**pgsql.dsn())
    with closing(conn.cursor()) as cursor:
        cursor.execute("CREATE TABLE hello(id int, value varchar(256))")
        cursor.execute("INSERT INTO hello values(1, 'hello'), (2, 'ciao')")
    conn.commit()
    conn.close()
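# The pgsql argument above exposes a dsn() method, which matches the
# interface of the testing.postgresql package; a minimal sketch of how such
# a handler might be driven, assuming that package is what backs it (the
# original caller is not shown):
import testing.postgresql

with testing.postgresql.Postgresql() as pgsql:
    # dsn() yields connection kwargs (host, port, user, database)
    # that pg8000.connect accepts directly
    handler(pgsql)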
def create_default_database(self):
    with closing(pg8000.connect(**self.dsn(database='postgres'))) as conn:
        conn.autocommit = True
        with closing(conn.cursor()) as cursor:
            cursor.execute("SELECT COUNT(*) FROM pg_database WHERE datname='test'")
            if cursor.fetchone()[0] <= 0:
                cursor.execute('CREATE DATABASE test')
def sync():
    tids = request.args.getlist('tid')
    if len(tids) == 0:
        return json.dumps({})
    try:
        conn = pg8000.connect(user="******", password="******",
                              database="cb15rna",
                              host="cb15rna.ciacashmbpf0.us-east-1.rds.amazonaws.com")
        cur = conn.cursor()
        cur.execute("select time, tid, tpm*1000000, attempt from differential "
                    "where tid in (" +
                    ','.join("'" + tid + "'" for tid in tids) +
                    ") order by tid,attempt,time")
        data = cur.fetchall()
        cur.close()
        conn.close()

        data_dict = {}
        for d in data:
            d[0] = str(d[0])
            if d[3] not in data_dict.keys():
                data_dict[d[3]] = {}
            if d[1] not in data_dict[d[3]].keys():
                data_dict[d[3]][d[1]] = [[d[0], d[2]]]
            else:
                data_dict[d[3]][d[1]].append([d[0], d[2]])
        # print json.dumps(data_dict)
        return json.dumps(data_dict)
    except:
        traceback.print_exc()
        return json.dumps(tids)
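# The query in sync() splices request values straight into the SQL string,
# which is open to injection. A minimal sketch of a parameterized
# alternative, assuming pg8000's default 'format' paramstyle; the name
# sync_safe is hypothetical, and conn comes from the snippet above:
def sync_safe(conn, tids):
    # build one %s placeholder per tid instead of quoting values inline
    placeholders = ','.join(['%s'] * len(tids))
    cur = conn.cursor()
    cur.execute(
        "select time, tid, tpm*1000000, attempt from differential "
        "where tid in (" + placeholders + ") order by tid,attempt,time",
        tids)
    return cur.fetchall()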
def get_table_data(exp_name, start, length, direction, col, query):
    try:
        conn = pg8000.connect(user="******", password="******",
                              database="cb15rna",
                              host="cb15rna.ciacashmbpf0.us-east-1.rds.amazonaws.com")

        cur = conn.cursor()
        qstr = construct_query(exp_name, start, length, direction, col)
        format_qstr = '%' + query + '%'
        cur.execute(qstr, (format_qstr,))
        data = cur.fetchall()
        cur.close()

        cur = conn.cursor()
        cur.execute('select count(*) from ' + exp_name)
        count = cur.fetchall()
        cur.close()

        cur = conn.cursor()
        cur.execute('select count(*) from ' + exp_name + ' where tid ilike %s',
                    (format_qstr,))
        filteredCount = cur.fetchall()
        cur.close()
        conn.close()

        dataList = list(data)
        return {'recordsTotal': count[0][0],
                'recordsFiltered': filteredCount[0][0],
                'data': dataList}
    except:
        traceback.print_exc()
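# construct_query is referenced above but not shown; a minimal hypothetical
# sketch that builds the paged, sorted query while whitelisting the sort
# column and direction (every name and clause here is an assumption, not
# the original implementation):
def construct_query(exp_name, start, length, direction, col):
    # only allow known columns and directions into the SQL text
    allowed_cols = {'tid', 'time', 'tpm', 'attempt'}
    if col not in allowed_cols:
        col = 'tid'
    direction = 'desc' if str(direction).lower() == 'desc' else 'asc'
    # the ilike pattern stays a %s placeholder for cursor.execute;
    # paging values are coerced to int before interpolation
    return ('select * from ' + exp_name + ' where tid ilike %s '
            'order by ' + col + ' ' + direction +
            ' limit ' + str(int(length)) + ' offset ' + str(int(start)))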
def execute(sqls):
    connection = pg8000.connect(host=conf.HOST, user='******',
                                password='******', database='dblp')
    cursor = connection.cursor()
    for sql in sqls:
        cursor.execute(sql.replace('%', '%%'))
    connection.commit()
    log.debug("Done processing.")
def main():
    pg8000.paramstyle = "qmark"

    conn1 = sqlite3.connect('../data/dumps_new.db')
    conn2 = pg8000.connect(user="******", host="npaa4726", port=5432,
                           database="cprodev", password="******")
    insert_cursor = conn2.cursor()

    select_cursor = conn1.execute("""
        SELECT DATE
             , TIME
             , USER
             , ERROR
             , PROGRAM
             , SERVER
             , REQUEST
             , REQUEST_USER
             , REQUEST_DATE
             , REQUEST_TIME
             , REQUEST_DESCRIPTION
          FROM SAP_DUMPS_NEW
         WHERE DATE >= '2018-05-01'
         ORDER BY DATE, TIME
    """)

    count = 0
    for row in select_cursor:
        insert_cursor.execute("""
            INSERT INTO SAP_DUMPS(
                DATE
              , TIME
              , USER_
              , ERROR
              , PROGRAM
              , SERVER
              , REQUEST
              , REQUEST_USER
              , REQUEST_DATE
              , REQUEST_TIME
              , REQUEST_DESCRIPTION
            ) VALUES (
                ?  -- DATE
              , ?  -- TIME
              , ?  -- USER
              , ?  -- ERROR
              , ?  -- PROGRAM
              , ?  -- SERVER
              , ?  -- REQUEST
              , ?  -- REQUEST_USER
              , ?  -- REQUEST_DATE
              , ?  -- REQUEST_TIME
              , ?  -- REQUEST_DESCRIPTION
            )
        """, args=row)
        count += 1
        if count % 1000 == 0:
            print()
            print(count)
            print()
        else:
            print(".", end="")

    conn2.commit()
def is_connection_available(self):
    try:
        with closing(pg8000.connect(**self.dsn(database='template1'))):
            pass
    except pg8000.Error:
        return False
    else:
        return True
def _get_connection(self, host, port, user, password, dbname):
    if host != "" and user != "":
        if host == "localhost" and password == "":
            # Use ident method
            connection = pg.connect("user=%s dbname=%s" % (user, dbname))
        elif port != "":
            connection = pg.connect(host=host, port=port, user=user,
                                    password=password, database=dbname,
                                    ssl=False)
        else:
            # no ssl setting is in scope here, so default to off as in the
            # branch above
            connection = pg.connect(host=host, user=user, password=password,
                                    database=dbname, ssl=False)
    else:
        if not host:
            raise CheckException("Please specify a Postgres host to connect to.")
        elif not user:
            raise CheckException("Please specify a user to connect to Postgres as.")
    return connection
def connect(self):
    self.connection = pg8000.connect(
        host=self.host,
        port=self.port,
        database=self.database,
        user=self.user,
        password=self.password)
    self.connection.autocommit = False
    self.cursor = self.connection.cursor()
    self.cursor.execute("set max_parallel_degree = 64;")
    self.cursor.execute("set temp_tablespaces = temp_tbs;")
    self.cursor.execute("select pg_reload_conf();")
def _is_connection_available(self):
    """Tests if the connection to the new cluster is available"""
    try:
        with closing(pg8000.connect(**self.dsn)):
            return True
    except pg8000.Error:
        return False
def testSsl(self):
    data = db_connect.copy()
    data["ssl"] = True
    if PRE_26:
        self.assertRaises(pg8000.InterfaceError, pg8000.connect, **data)
    else:
        db = pg8000.connect(**data)
        db.close()
def testBytesPassword(con, db_kwargs):
    # Create user
    username = '******'
    password = '******'
    with con.cursor() as cur:
        cur.execute(
            "create user " + username + " with password '" + password + "';")
        con.commit()

        db_kwargs['user'] = username
        db_kwargs['password'] = password.encode('utf8')
        db_kwargs['database'] = 'pg8000_md5'
        with pytest.raises(pg8000.ProgrammingError, match='3D000'):
            pg8000.connect(**db_kwargs)

        cur.execute("drop role " + username)
        con.commit()
def get_pg_conn(db_host, db, db_user, db_pwd, schema_name, db_port=5439,
                query_group=None, query_slot_count=1, ssl=True, **kwargs):
    conn = None

    if debug:
        comment('Connect %s:%s:%s:%s' % (db_host, db_port, db, db_user))

    try:
        conn = pg8000.connect(user=db_user, host=db_host, port=int(db_port),
                              database=db, password=db_pwd, ssl=ssl,
                              timeout=None)
        conn._usock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
        conn.autocommit = True
    except Exception as e:
        print("Exception on Connect to Cluster: %s" % e)
        print('Unable to connect to Cluster Endpoint')
        cleanup(conn)
        return None

    # set search paths
    aws_utils.set_search_paths(conn, schema_name, exclude_external_schemas=True)

    if query_group is not None and query_group != '':
        set_query_group = 'set query_group to %s' % query_group
        if debug:
            comment(set_query_group)
        run_commands(conn, [set_query_group])

    set_slot_count = None
    if query_slot_count is not None and query_slot_count > 1:
        set_slot_count = 'set wlm_query_slot_count = %s' % query_slot_count

    if set_slot_count is not None:
        if debug:
            comment(set_slot_count)
        run_commands(conn, [set_slot_count])

    # set a long statement timeout
    set_timeout = "set statement_timeout = '36000000'"
    if debug:
        comment(set_timeout)
    run_commands(conn, [set_timeout])

    # set application name
    set_name = "set application_name to 'AnalyzeVacuumUtility-v%s'" % __version__
    if debug:
        comment(set_name)
    run_commands(conn, [set_name])

    comment("Connected to %s:%s:%s as %s" % (db_host, db_port, db, db_user))

    return conn
def getPgConn(jdbc_file_path='data/jdbc.properties'):
    import pg8000
    prop = readProperties(jdbc_file_path)
    conn = pg8000.connect(
        user=prop['jdbc.user'],
        password=prop['jdbc.password'],
        host=prop['jdbc.host'],
        port=int(prop['jdbc.port']),
        database=prop['jdbc.dbname'])
    return conn
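# readProperties is not shown above; a minimal hypothetical sketch of a
# helper that parses simple key=value .properties lines (the name and
# behavior are assumptions, not part of the original):
def readProperties(path):
    props = {}
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            # skip blank lines and comments
            if not line or line.startswith('#'):
                continue
            key, _, value = line.partition('=')
            props[key.strip()] = value.strip()
    return props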
def make(klass):
    import pg8000
    import sustentacao.conf as conf
    import sustentacao.security.repository as ssr
    pg8000.paramstyle = "qmark"
    dbcon = pg8000.connect(**conf.SECURITY_CONFIG["pg_settings"])
    repo = ssr.DbSecurityRepo(dbcon)
    controller = klass(repo)
    return controller
def connect_db():
    return pg8000.connect(
        database=os.environ['DATABASE'],
        host=os.environ['HOST'],
        port=int(os.environ['APP_PORT']),
        user=os.environ['USER'],
        password=os.environ['PASSWORD'],
        ssl=True
    )
def make(klass):
    import pg8000
    pg8000.paramstyle = "qmark"
    dbcon = pg8000.connect(**conf.JOBS_CONFIG["pg_settings"])
    interactor = sp.Interactor.make()
    jobs_repo = sjr.JobsRepo(dbcon)
    report_renderer = ReportRenderer(interactor)
    jobs_controller = klass(jobs_repo, report_renderer)
    return jobs_controller
def connect_to_postgres():
    try:
        return connect(host=host, user=u"postgres", port=port,
                       database=database)
    except (InterfaceError, ProgrammingError) as e:
        Message.new(
            message_type=u"acceptance:integration:postgres_connect",
            exception=unicode(e.__class__),
            reason=unicode(e)).write()
        return False
def main(): """ Main function """ parser = argparse.ArgumentParser( description=""" Export FloodFront marker data into CSV format. """) parser.add_argument( '--since', type=str, help=""" Narrow selection to markers after this date. YYYY-MM-DD """) parser.add_argument('-o', '--output', type=str, help=""" File output name. """) args = parser.parse_args() if (args.since is not None) and (re.search("^[0-9]{4}-[0-9]{2}-[0-9]{2}$", args.since) is None): raise ValueError( "Invalid date entered {0}. Date must be YYYY-MM-DD".format( args.since)) engine = settings.DATABASES['default']['ENGINE'] if engine == 'django.db.backends.sqlite3': sqlite_db_file = os.path.join(config['floodfront']['ProjectPath'], 'db.sqlite3') conn = sqlite.connect(sqlite_db_file) tty_print('Using SQLite engine.') elif engine == 'django.db.backdends.postgresql': conn = pg.connect(user="******", database="floodfront") tty_print('Using Postgres engine.') else: sys.stderr.write('Engine not supported: {0}'.format(engine)) exit(1) cursor = conn.cursor() query = """ SELECT id, email, lat, lon, accuracy, created_on, marker_type, description FROM server_marker """ if args.since is not None: tty_print("Searching for date {0}".format(args.since)) query = query + "WHERE created_on >= '{0}'".format(args.since) else: now = strftime("%Y-%m-%d", gmtime()) tty_print("Searching for date {0} (default today)".format(now)) query = query + "WHERE created_on >= '{0}'".format(now) cursor.execute(query) result = cursor.fetchall() result = sorted(result, key=lambda row: row[5], reverse=True) filename = "marker.csv" if args.output is not None: filename = args.output if sys.stdout.isatty(): with open(filename, 'w') as csvfile: writer = csv.writer(csvfile, dialect='excel') for row in result: if isinstance(row[5], datetime.datetime): created_on = row[5].strftime("%Y-%m-%dT%H:%M:%S") elif isinstance(row[5], str): created_on = str( datetime.datetime.strptime(row[5], '%Y-%m-%d %H:%M:%S.%f')) writer.writerow([ row[0], row[1], round(float(row[2]), 6), round(float(row[3]), 6), round(float(row[4]), 6), created_on, type_to_class(row[6]), (str(row[7] or "No description.")) ]) else: # Pipe mode writer = csv.writer(sys.stdout, dialect='excel') for row in result: if isinstance(row[5], datetime.datetime): created_on = row[5].strftime("%Y-%m-%dT%H:%M:%S") elif isinstance(row[5], str): created_on = str( datetime.datetime.strptime(row[5], '%Y-%m-%d %H:%M:%S.%f')) writer.writerow([ row[0], row[1], round(float(row[2]), 6), round(float(row[3]), 6), round(float(row[4]), 6), created_on, type_to_class(row[6]), (str(row[7] or "No description.")) ])
def handler(event, context):
    start_time = time.time()

    # open database connection
    pg = pg8000.connect(host=os.environ['database_host'],
                        port=5432,
                        database=os.environ['database_name'],
                        user=os.environ['database_user'],
                        password=os.environ['database_password'])

    # get clans
    pg_cursor = pg.cursor()
    pg_cursor.execute(
        "SELECT clan_id, clan_name FROM groups.t_clans "
        "WHERE clan_name NOT IN ('Iron Orange 3rd Bn', 'Iron Orange Moon') "
        "ORDER BY clan_name"
    )
    clans = pg_cursor.fetchall()

    # close database connection
    pg.close()

    # get active members
    active = {}
    request_details = []
    for clan in clans:
        url = os.environ['clan_url'].format(clan[0])
        # print(url)  # debugging
        clan.append(url)
        request_details.append(clan)

    responses = process_requests(request_details)

    active_members = []
    for response in responses:
        for member in response[3]:
            if 'isOnline' in member.keys():
                if member['isOnline']:
                    # print(member['destinyUserInfo']['displayName'])  # debugging
                    active_member = {}
                    active_member['clan_name'] = response[1]
                    active_member['gamertag'] = member['destinyUserInfo']['displayName']
                    active_member['destiny'] = member['destinyUserInfo']
                    active_member['profile_url'] = os.environ['character_url'].format(
                        member['destinyUserInfo']['membershipType'],
                        member['destinyUserInfo']['membershipId'])
                    active_members.append(active_member)
    # print(active_members)  # debugging

    character_activities = process_profile_requests(active_members)
    # print(json.dumps(character_activities))

    print('Sorting character activities')
    character_activities = sorted(character_activities,
                                  key=itemgetter('clan_name'))

    actives = {}
    for gamertag in character_activities:
        if len(gamertag['activity_detail']) > 0:
            combined = '{0}: {1}'.format(gamertag['gamertag'],
                                         gamertag['activity_detail'])
        else:
            combined = gamertag['gamertag']
        if gamertag['clan_name'] not in actives:
            actives[gamertag['clan_name']] = []
        actives[gamertag['clan_name']].append(combined)

    active_members = {}
    print(json.dumps(actives))
    for clan in actives:
        sorted_actives = sorted(actives[clan], key=str.lower)
        single_string = '\n'.join(sorted_active for sorted_active in sorted_actives)
        active_members[clan] = {}
        active_members[clan] = single_string
    # print(json.dumps(active_members))  # debugging

    duration = time.time() - start_time
    print('Duration: {0:.2f}s'.format(duration))

    r.hmset('online', {"json": json.dumps(active_members)})

    return active_members
def setUp(self):
    self.db = pg8000.connect(**db_connect)
    self.cursor = self.db.cursor()
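# setUp's counterpart is not shown; a minimal hypothetical tearDown that
# releases what setUp opened, assuming the same unittest class:
def tearDown(self):
    self.cursor.close()
    self.db.close()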
def test_parameterized_record_query(pg_params, queries):
    with db.connect(**pg_params, row_factory=dict_row) as conn:
        t.run_parameterized_record_query(conn, queries, "pg", date)
import pg8000 as pg

city = "tucson"

conn = pg.connect(user='******', password='******',
                  host='postgresql.crvadswmow49.us-west-2.rds.amazonaws.com',
                  database='Networkland')  # port default 5432
cursor = conn.cursor()

query = "SELECT gid, ST_StartPoint(geom), ST_EndPoint(geom), ST_Length(geom) FROM {}".format(
    city)
cursor.execute(query)
results = cursor.fetchall()
print 'results fetched'

nodes_gid = {}     # (fromnode, tonode) : gid
nodes_length = {}  # (fromnode, tonode) : length
for row in results:
    if (row[1], row[2]) not in nodes_gid:
        nodes_gid[(row[1], row[2])] = row[0]
        nodes_length[(row[1], row[2])] = row[3]
    else:
        duplicate = nodes_gid[(row[1], row[2])]
        print "ERROR: {} HAS DUPLICATE NODES with {}".format(row[0], duplicate)
def testSsl(self):
    data = db_connect.copy()
    data["ssl"] = True
    db = pg8000.connect(**data)
    db.close()
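# The two testSsl variants above pass ssl=True, which is the legacy pg8000
# keyword; recent pg8000 releases take an ssl_context argument instead. A
# minimal sketch under that assumption, reusing the db_connect kwargs from
# the snippets above:
import ssl
import pg8000

context = ssl.create_default_context()
data = db_connect.copy()
data["ssl_context"] = context  # replaces the old ssl=True flag
db = pg8000.connect(**data)
db.close()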
def main():
    module = AnsibleModule(argument_spec=dict(
        login_user=dict(default="rs_master"),
        login_password=dict(default="", no_log=True),
        login_host=dict(default=""),
        login_unix_socket=dict(default=""),
        login_ssl=dict(default=True, type='bool'),
        port=dict(default=5439, type='int'),
        db=dict(required=True),
        user=dict(default=''),
        password=dict(default=None, no_log=True),
        update_password=dict(default="always", choices=["always", "on_create"]),
        group=dict(default=''),
        state=dict(default="present", choices=["absent", "present"]),
        permission_flags=dict(default=[], type='list'),
        privs=dict(default=[], type='list'),
        expires=dict(default=None),
        conn_limit=dict(default=None)),
        supports_check_mode=True)

    user = module.params["user"]
    password = module.params["password"]
    update_password = module.params["update_password"]
    state = module.params["state"]
    db = module.params["db"]
    group = module.params["group"]
    port = module.params["port"]
    permission_flags = module.params["permission_flags"]
    privs = module.params["privs"]
    expires = module.params["expires"]
    conn_limit = module.params["conn_limit"]

    if not pg8000_found:
        module.fail_json(msg="the python pg8000 module is required")

    # To use defaults values, keyword arguments must be absent, so
    # check which values are empty and don't include in the **kw
    # dictionary
    params_map = {
        "login_host": "host",
        "login_user": "******",
        "login_password": "******",
        "login_ssl": "ssl",
        "port": "port",
        "db": "database"
    }
    kw = dict((params_map[k], v) for (k, v) in iteritems(module.params)
              if k in params_map and v != "")

    # If a login_unix_socket is specified, incorporate it here.
    is_localhost = ("host" not in kw or kw["host"] == ""
                    or kw["host"] == "localhost")
    if is_localhost and module.params["login_unix_socket"] != "":
        kw["host"] = module.params["login_unix_socket"]

    cursor = None
    try:
        pg8000.paramstyle = "pyformat"
        db_connection = pg8000.connect(**kw)
        db_connection.autocommit = False
        cursor = db_connection.cursor()
    except InterfaceError:
        e = get_exception()
        module.fail_json(
            msg="unable to connect to database, check credentials and SSL flag!: %s " % e)
    except Exception:
        e = get_exception()
        module.fail_json(msg="unable to connect to database: %s" % e)

    kw = {'user': user, 'group': group}
    changed = False
    user_added = False
    group_added = False
    user_removed = False
    group_removed = False

    # ===========================================
    # Main decision tree
    #
    try:
        if state == "present":
            if user != '':
                if not user_exists(cursor, user):
                    user_change(cursor, user, password, permission_flags,
                                expires, conn_limit)
                    changed = True
                    user_added = True
                else:
                    current_user_data = get_user(cursor, user)
                    if update_password == "on_create":
                        password = None
                    user_change(cursor, user, password, permission_flags,
                                expires, conn_limit, 'ALTER')
                    updated_user_data = get_user(cursor, user)
                    changed = (update_password == "always"
                               or current_user_data != updated_user_data)
            if user == '' and group != '' and not group_exists(cursor, group):
                group_add(cursor, group)
                changed = True
                group_added = True
            group_updated = False
            if user != '':
                group_updated = group_assign(cursor, group, user)
            privs_updated = apply_privs(cursor, privs, user, group)
            changed = changed or group_updated or privs_updated
        # absent case
        else:
            if user != '' and user_exists(cursor, user):
                group_assign(cursor, None, user)
                user_delete(cursor, user)
                changed = True
                user_removed = True
            if user == '' and group != '' and group_exists(cursor, group):
                group_delete(cursor, group)
                changed = True
                group_removed = True
    except ValueError:
        e = get_exception()
        module.fail_json(msg="Invalid module input: %s" % e)
    except (InterfaceError, DatabaseError):
        e = get_exception()
        module.fail_json(msg="Database error occurred: %s" % e)

    if changed:
        if module.check_mode:
            db_connection.rollback()
        else:
            db_connection.commit()

    cursor.close()

    kw['changed'] = changed
    kw['user_added'] = user_added
    kw['group_added'] = group_added
    kw['user_removed'] = user_removed
    kw['group_removed'] = group_removed
    module.exit_json(**kw)
#!/usr/bin/python3
import pprint, time

# Connect to Mongo DB
import pymongo
mongo = pymongo.MongoClient('localhost', 3001)
mongodb = mongo.meteor
mdata = mongodb.markdata

# Connect to Postgresql DB
import pg8000
dbc = pg8000.connect(user="", password="", host="", port=5432, database='')
db = dbc.cursor()

# Make a list of dictionaries with the marketing days with
# "_id", "mark_attempted" and "money_spent"

# get a list of marketing days from Postgresql
print(
    time.ctime(),
    'Obtaining data from Postgres with "_id", "mark_attempted" and "money_spent" '
)
db.execute('''
    SELECT date(sample_col_1)::text as _id,
           count(sample_col_2)::integer as mark_attempted
      FROM mark_attempts
     GROUP BY date(sample_col_1)
     ORDER BY date(sample_col_1) ASC
''')
rawMarkDays = db.fetchall()

# transform the list obtained from Postgresql to a list of dictionaries
def test_record_query(pg_params, queries):
    with db.connect(**pg_params, row_factory=dict_row) as conn:
        t.run_record_query(conn, queries)
def random_lake_sentence(lakes, sentences):
    lake = random.choice(lakes)
    possible_keys = [k for k in lake.keys()
                     if k != 'name' and lake[k] is not None]
    col = random.choice(possible_keys)
    sentence_template = sentences[col]
    output = sentence_template.format(lake['name'], lake[col])
    return output


twitter = twython.Twython(api_key, api_secret, access_token, token_secret)

lakes = []
conn = pg8000.connect(database="mondial")  # may need extra auth info!
cursor = conn.cursor()
cursor.execute("SELECT name, area, depth, elevation, type, river FROM lake")
for row in cursor.fetchall():
    lake = {
        'name': row[0],
        'area': row[1],
        'depth': row[2],
        'elevation': row[3],
        'type': row[4],
        'river': row[5]
    }
    lakes.append(lake)

sentences = {
    'area': 'The area {} is {} square kilometers.',
import pg8000 as pg
import time
import config  # This file is exported from

print '...'

start_time = time.time()

folder = 'D:/Will/Metropia/Network Updates/New York/Update 6-28-2016/'
city = "osm_newyork"
out_boundary = open(folder + city + '_polygon.json', 'wb')

conn = pg.connect(user=config.NETWORKLAND_USER,
                  password=config.NETWORKLAND_PASSWORD,
                  host=config.NETWORKLAND_URL,
                  database=config.NETWORKLAND_DB)  # port default 5432
cursor = conn.cursor()

query = "SELECT ST_AsGeoJSON(geom) FROM {}_polygons WHERE type='boundary'".format(
    'newyork')
cursor.execute(query)
results = cursor.fetchall()

for row in results:
    out_boundary.write(row[0])

print("Elapsed time... %s seconds" % (time.time() - start_time))
def snapshot(config_sources):
    aws_region = get_config_value(['AWS_REGION'], config_sources)

    set_debug = get_config_value([
        'DEBUG',
        'debug',
    ], config_sources)
    if set_debug is not None and set_debug.upper() == 'TRUE':
        global debug
        debug = True

    kms = boto3.client('kms', region_name=aws_region)

    if debug:
        print("Connected to AWS KMS & CloudWatch in %s" % aws_region)

    user = get_config_value(['DbUser', 'db_user', 'dbUser'], config_sources)
    host = get_config_value(
        ['HostName', 'cluster_endpoint', 'dbHost', 'db_host'], config_sources)
    port = int(
        get_config_value(['HostPort', 'db_port', 'dbPort'], config_sources))
    database = get_config_value(['DatabaseName', 'db_name', 'db'],
                                config_sources)

    # we may have been passed the password in the configuration, so extract it if we can
    pwd = get_config_value(['db_pwd'], config_sources)

    if pwd is None:
        enc_password = get_config_value([
            'EncryptedPassword', 'encrypted_password', 'encrypted_pwd',
            'dbPassword'
        ], config_sources)

        # resolve the authorisation context, if there is one, and decrypt the password
        auth_context = get_config_value('kms_auth_context', config_sources)
        if auth_context is not None:
            auth_context = json.loads(auth_context)

        try:
            if auth_context is None:
                pwd = kms.decrypt(
                    CiphertextBlob=base64.b64decode(enc_password))['Plaintext']
            else:
                pwd = kms.decrypt(
                    CiphertextBlob=base64.b64decode(enc_password),
                    EncryptionContext=auth_context)['Plaintext']
        except:
            print('KMS access failed: exception %s' % sys.exc_info()[1])
            print('Encrypted Password: %s' % enc_password)
            print('Encryption Context %s' % auth_context)
            raise

    # Connect to the cluster
    try:
        if debug:
            print('Connecting to Redshift: %s' % host)
        conn = pg8000.connect(database=database, user=user, password=pwd,
                              host=host, port=port, ssl=ssl)
    except:
        print('Redshift Connection Failed: exception %s' % sys.exc_info()[1])
        raise

    if debug:
        print('Successfully Connected to Cluster')

    # create a new cursor for methods to run through
    cursor = conn.cursor()

    # set application name
    set_name = "set application_name to 'RedshiftSystemTablePersistence-v%s'" % __version__
    if debug:
        print(set_name)
    cursor.execute(set_name)

    # load the table configuration
    table_config = json.load(
        open(os.path.dirname(__file__) + '/lib/history_table_config.json', 'r'))

    # create the dependent objects if we need to
    create_schema_objects(cursor, conn)

    # snapshot stats into history tables
    insert_rowcounts = snapshot_system_tables(cursor, conn, table_config)

    # cleanup history tables if requested in the configuration
    delete_rowcounts = None
    cleanup_after_days = get_config_value(
        [config_constants.SYSTABLE_CLEANUP_AFTER_DAYS], config_sources)
    if cleanup_after_days is not None:
        try:
            cleanup_after_days = int(cleanup_after_days)
        except ValueError:
            print("Configuration value '%s' must be an integer" %
                  config_constants.SYSTABLE_CLEANUP_AFTER_DAYS)
            raise
        if cleanup_after_days > 0:
            delete_rowcounts = cleanup_snapshots(cursor, conn,
                                                 cleanup_after_days,
                                                 table_config)

    cursor.close()
    conn.close()

    return {"inserted": insert_rowcounts, "deleted": delete_rowcounts}
def __init__(self, *args, **kwargs):
    self.conn = pg8000.connect(*args, **kwargs)
    self.apply_migrations()
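# apply_migrations is not shown above; a minimal hypothetical sketch of what
# such a method might do (table name, directory layout, and ordering scheme
# are all assumptions, not the original implementation):
import os

def apply_migrations(self, migrations_dir="migrations"):
    cur = self.conn.cursor()
    # track which scripts have already run
    cur.execute("CREATE TABLE IF NOT EXISTS schema_migrations (name text PRIMARY KEY)")
    cur.execute("SELECT name FROM schema_migrations")
    applied = {row[0] for row in cur.fetchall()}
    # apply outstanding .sql files in lexical order; each file is assumed to
    # hold a single statement, since pg8000 uses the extended query protocol
    for name in sorted(os.listdir(migrations_dir)):
        if name.endswith(".sql") and name not in applied:
            with open(os.path.join(migrations_dir, name)) as fh:
                cur.execute(fh.read())
            cur.execute("INSERT INTO schema_migrations (name) VALUES (%s)", (name,))
    self.conn.commit()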
def main():
    opts = optparse.OptionParser()
    opts.add_option('-e', '--exchange', help='exchange name')
    opts.add_option('-t', '--sec_type', help='sec type: ' + ', '.join(types))
    opts.add_option('-f', '--file', help='security symbol list file')
    opts = opts.parse_args()[0]
    if not opts.exchange:
        print("Error: --exchange not given")
        return
    if not opts.file:
        print("Error: --file not given")
        return
    if not opts.sec_type:
        print("Error: --sec_type not given")
        return
    if opts.sec_type not in types:
        print("Error: invalid sec_type")
        return

    conn = pg8000.connect(host='127.0.0.1', database='opentrade',
                          user='******', password='******')
    cursor = conn.cursor()
    exchanges = {}
    cursor.execute('select name, id from exchange')
    for m in cursor.fetchall():
        exchanges[m[0]] = m[1]
    exchange_id = exchanges.get(opts.exchange)
    if exchange_id is None:
        print('unknown exchange: ' + opts.exchange)
        return

    cursor.execute('select bbgid from security')
    bbgids = set([r[0] for r in cursor.fetchall()])

    with open(opts.file) as fh:
        fh.readline()
        for line in fh:
            toks = line.strip().split(',')
            fields = [
                'symbol', 'local_symbol', '', 'currency', 'bbgid', 'sedol',
                'isin', 'cusip', 'close_price', 'adv20', 'market_cap',
                'sector', 'industry_group', 'industry', 'sub_industry', '',
                'lot_size', 'multiplier', ''
            ]
            values = []
            valid_fields = []
            bbgid = None
            for i in range(0, len(fields)):
                f = fields[i]
                if not f:
                    continue
                valid_fields.append(fields[i])
                values.append(toks[i])
                v = values[-1]
                if not v:
                    values[-1] = None
                    continue
                if f == 'bbgid':
                    bbgid = v
                if f in ('close_price', 'adv20', 'market_cap', 'lot_size',
                         'multiplier'):
                    values[-1] = float(v)
                if f in ('sector', 'industry_group', 'industry',
                         'sub_industry'):
                    values[-1] = int(v)
            values[0] = values[0].split()[0]
            if bbgid not in bbgids:
                valid_fields.append('"type"')
                values.append(opts.sec_type)
                valid_fields.append('exchange_id')
                values.append(exchange_id)
                cursor.execute(
                    'insert into security (' + ', '.join(valid_fields) +
                    ') values(' + ', '.join(['%s'] * len(valid_fields)) + ')',
                    values)
            else:
                cursor.execute(
                    'update security set ' +
                    ', '.join([f + '=%s' for f in valid_fields]) +
                    ' where bbgid=%s', values + [bbgid])
    conn.commit()
def _get_connection(self):
    return pg8000.connect(database=self.database,
                          user=self.user,
                          password=self.password,
                          host=self.host,
                          port=self.port)
def get_pg_conn():
    global db_connections
    pid = str(os.getpid())
    conn = None

    # get the database connection for this PID
    try:
        conn = db_connections[pid]
    except KeyError:
        pass

    if conn == None:
        # connect to the database
        if debug:
            comment('Connect [%s] %s:%s:%s:%s' % (pid, db_host, db_port, db, db_user))

        try:
            conn = pg8000.connect(user=db_user, host=db_host, port=db_port,
                                  database=db, password=db_pwd,
                                  ssl=ssl_option, timeout=None,
                                  keepalives=1, keepalives_idle=200,
                                  keepalives_interval=200, keepalives_count=5)
        except Exception as e:
            write(e)
            write('Unable to connect to Cluster Endpoint')
            cleanup()
            return ERROR

        # set default search path
        search_path = 'set search_path = \'$user\',public,%s' % (analyze_schema)
        if target_schema != None and target_schema != analyze_schema:
            search_path = search_path + ', %s' % (target_schema)

        if debug:
            comment(search_path)

        cursor = None
        try:
            cursor = conn.cursor()
            cursor.execute(search_path)
        except pg8000.Error as e:
            if re.match('schema "%s" does not exist' % (analyze_schema,),
                        e.message) != None:
                write('Schema %s does not exist' % (analyze_schema,))
            else:
                write(e.message)
            return None

        if query_group != None:
            set_query_group = 'set query_group to %s' % (query_group)
            if debug:
                comment(set_query_group)
            cursor.execute(set_query_group)

        if query_slot_count != None and query_slot_count != 1:
            set_slot_count = 'set wlm_query_slot_count = %s' % (query_slot_count)
            if debug:
                comment(set_slot_count)
            cursor.execute(set_slot_count)

        # set a long statement timeout
        set_timeout = "set statement_timeout = '1200000'"
        if debug:
            comment(set_timeout)
        cursor.execute(set_timeout)

        # cache the connection
        db_connections[pid] = conn

    return conn
def make_db_connection():
    try:
        return pg8000.connect(**CREDENTIALS)
    except pg8000.DatabaseError as e:
        print(str(e))
        sys.exit(1)
def __init__(self, connection_file):
    settings = read_settings(connection_file)
    self.connection = pg8000.connect(
        user=settings['db-user'],
        password=settings['db-password'],
        database=settings['db-database'])
import pg8000

db = {
    "DB_HOST": "localhost",
    "DB": "PostgreSQL 9.6",
    "DB_USER": "******",
    "DB_PORT": 5432
}

conn = pg8000.connect(host=db["DB_HOST"],
                      port=db["DB_PORT"],
                      user=db["DB_USER"],
                      password='******')
cursor = conn.cursor()
cursor.execute("CREATE TEMPORARY TABLE blacklist (id SERIAL, title TEXT)")
cursor.execute(
    "INSERT INTO blacklist (title) VALUES (%s), (%s), (%s) RETURNING id, title",
    ("*****@*****.**", "*****@*****.**", "*****@*****.**"))
results = cursor.fetchall()
for row in results:
    id, title = row
    print("id = %s, title = %s" % (id, title))

s = "SELECT title from blacklist where title like 'n%'"
cursor.execute(s)
print(cursor.fetchall())


def deleter(term):
    s = "DELETE FROM blacklist where title = '" + term + "'"
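# deleter above splices its argument into the DELETE statement; a minimal
# parameterized sketch of the same operation (the original body is cut off,
# so the execute/commit calls here are assumptions):
def deleter_safe(term):
    cursor.execute("DELETE FROM blacklist WHERE title = %s", (term,))
    conn.commit()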
text="Stücklisten", values=("", "", "")) for part in getPartLists(cur): tree.insert(partlistTab, "end", f"d{part[0]}", text='', values=(part[1], part[2], part[3])) tree.pack(side=tk.LEFT, fill=tk.BOTH) master.mainloop() if __name__ == "__main__": con = pg8000.connect(user="******", host='192.168.2.68', password="", database="sven") cur = con.cursor() #createCustomer(cur) #createParts(cur) #createProjects(cur) #createPartLists(cur) showDatabase(cur) con.commit() con.close()
def main():
    import socket
    import pg8000
    import dpkt
    from sys import exit
    import python_TZSP
    # import python_sys_log_sql

    # Configure the IP and port to listen on
    UDP_IP = '0.0.0.0'
    UDP_PORT = 9999
    # Create a socket instance: IPv4, UDP
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # Bind the socket to the address and port
    sock.bind((UDP_IP, UDP_PORT))
    # encoding = "utf-8"

    conn = pg8000.connect(host='127.0.0.1', user='******',
                          password='******', database='mklogdb')
    cursor = conn.cursor()

    while True:
        try:
            # Receive data from the socket
            data, addr = sock.recvfrom(1024)
            # Test
            # print('RAW data is :', str(data))
            # print('RAW addr is :', addr)
            #
            # print('TZSP type is:', getType(data))
            # print('TZSP protocol is:', getProtocol(data))
            # print('TZSP TagType is:', getTagType(data))

            # Extract the encapsulated packet
            encapsulated_packet_data = python_TZSP.get_encapsulated_packet(data)
            # Get the source MAC address
            src_mac = python_TZSP.eth_addr(encapsulated_packet_data[6:12])

            eth = dpkt.ethernet.Ethernet(encapsulated_packet_data)
            ip = eth.data
            src = socket.inet_ntoa(ip.src)
            dst = socket.inet_ntoa(ip.dst)
            # print('dpkt src: ', src, ' dpkt dst: ', dst)
            tcp = ip.data
            # print(src_mac)
            print(src)
            # print(dst)

            # http = dpkt.http.Response(encapsulated_packet_data)
            # print(http)

            if tcp.dport == 80 and len(tcp.data) > 0:
                http = dpkt.http.Request(tcp.data)
                if http.uri is not None:
                    if http.uri[-4:] not in ('.jpg', '.png', '.gif', '.css', '.ico') \
                            and http.uri[-3:] not in '.js':
                        # print('uri is :::', http.uri)
                        # print('http header is :::', http.headers)
                        URL = http.headers['host'] + http.uri
                        # if 'user-agent' in http.headers.keys():
                        #     print('user-agent is :', http.headers['user-agent'])
                        # print('http header is :::', http.headers)
                        # print('i_src_ip is :', src)
                        # print('i_dst_ip is :', dst)
                        # print('i_src_mac is :', src_mac)
                        print('URL is :', URL)
                        # print('i_v_system is :', http.headers['user-agent'])
                        src_str = "\'" + str(src) + "\'"
                        dst_str = "\'" + str(dst) + "\'"
                        src_mac_str = "\'" + str(src_mac) + "\'"
                        URL_str = "\'" + str(URL) + "\'"
                        http_headers_str = "\'" + str(http.headers['user-agent']) + "\'"
                        str_execute = ("insert into urllist(src_ip, dst_ip, src_mac, url, v_system, v_time) values ("
                                       + src_str + "," + dst_str + ","
                                       + src_mac_str + "," + URL_str
                                       + "::varchar(500), " + http_headers_str
                                       + "::varchar(150), current_timestamp)")
                        # print(str_execute)
                        cursor.execute(str_execute)
                        conn.commit()
            elif tcp.dport == 443 and len(tcp.data) > 0:
                pass
        except Exception as e:
            print(str(e))
            continue
        except KeyboardInterrupt:
            print('Program terminated.')
            sock.close()
            exit()
def get_pg_conn():
    global db_connections
    pid = str(os.getpid())
    conn = None

    # get the database connection for this PID
    try:
        conn = db_connections[pid]
    except KeyError:
        pass

    if conn is None:
        # connect to the database
        if debug:
            comment('Connect [%s] %s:%s:%s:%s' % (pid, db_host, db_port, db, db_user))

        try:
            conn = pg8000.connect(user=db_user, host=db_host, port=db_port,
                                  database=db, password=db_pwd, ssl=ssl,
                                  timeout=None)
            # Enable keepalives manually until pg8000 supports it
            # For future reference: https://github.com/mfenniak/pg8000/issues/149
            # TCP keepalives still need to be configured appropriately on OS level as well
            conn._usock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
            conn.autocommit = True
        except Exception as e:
            print(e)
            print('Unable to connect to Cluster Endpoint')
            cleanup(conn)
            return ERROR

        aws_utils.set_search_paths(conn, schema_name, target_schema)

        if query_group is not None:
            set_query_group = 'set query_group to %s' % query_group
            if debug:
                comment(set_query_group)
            run_commands(conn, [set_query_group])

        if query_slot_count is not None and query_slot_count != 1:
            set_slot_count = 'set wlm_query_slot_count = %s' % query_slot_count
            if debug:
                comment(set_slot_count)
            run_commands(conn, [set_slot_count])

        # set a long statement timeout
        set_timeout = "set statement_timeout = '%s'" % statement_timeout
        if debug:
            comment(set_timeout)
        run_commands(conn, [set_timeout])

        # set application name
        set_name = "set application_name to 'ColumnEncodingUtility-v%s'" % __version__
        if debug:
            comment(set_name)
        run_commands(conn, [set_name])

        # Set search_path
        set_searchpath = "set search_path to '$user', public, %s;" % schema_name
        if debug:
            comment(set_searchpath)
        run_commands(conn, [set_searchpath])

        # turn off autocommit for the rest of the executions
        conn.autocommit = False

        # cache the connection
        db_connections[pid] = conn

    return conn
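# Recent pg8000 releases accept a tcp_keepalive flag directly (the
# awswrangler connect() wrapper below passes it through), which replaces the
# private _usock workaround above; a minimal sketch under that assumption,
# reusing the connection globals from the snippet above:
conn = pg8000.connect(user=db_user, host=db_host, port=db_port,
                      database=db, password=db_pwd,
                      tcp_keepalive=True)  # no setsockopt needed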
def connect(
    connection: Optional[str] = None,
    secret_id: Optional[str] = None,
    catalog_id: Optional[str] = None,
    dbname: Optional[str] = None,
    boto3_session: Optional[boto3.Session] = None,
    ssl_context: Optional[Dict[Any, Any]] = None,
    timeout: Optional[int] = None,
    tcp_keepalive: bool = True,
) -> pg8000.Connection:
    """Return a pg8000 connection from a Glue Catalog Connection.

    https://github.com/tlocke/pg8000

    Parameters
    ----------
    connection : Optional[str]
        Glue Catalog Connection name.
    secret_id : Optional[str]
        Specifies the secret containing the version that you want to
        retrieve. You can specify either the Amazon Resource Name (ARN)
        or the friendly name of the secret.
    catalog_id : str, optional
        The ID of the Data Catalog.
        If none is provided, the AWS account ID is used by default.
    dbname : Optional[str]
        Optional database name to overwrite the stored one.
    boto3_session : boto3.Session(), optional
        Boto3 Session. The default boto3 session will be used if
        boto3_session receives None.
    ssl_context : Optional[Dict]
        This governs SSL encryption for TCP/IP sockets.
        This parameter is forwarded to pg8000.
        https://github.com/tlocke/pg8000#functions
    timeout : Optional[int]
        This is the time in seconds before the connection to the server
        will time out. The default is None which means no timeout.
        This parameter is forwarded to pg8000.
        https://github.com/tlocke/pg8000#functions
    tcp_keepalive : bool
        If True then use TCP keepalive. The default is True.
        This parameter is forwarded to pg8000.
        https://github.com/tlocke/pg8000#functions

    Returns
    -------
    pg8000.Connection
        pg8000 connection.

    Examples
    --------
    >>> import awswrangler as wr
    >>> con = wr.postgresql.connect("MY_GLUE_CONNECTION")
    >>> with con.cursor() as cursor:
    >>>     cursor.execute("SELECT 1")
    >>>     print(cursor.fetchall())
    >>> con.close()

    """
    attrs: _db_utils.ConnectionAttributes = _db_utils.get_connection_attributes(
        connection=connection, secret_id=secret_id, catalog_id=catalog_id,
        dbname=dbname, boto3_session=boto3_session)
    if attrs.kind != "postgresql":
        raise exceptions.InvalidDatabaseType(
            f"Invalid connection type ({attrs.kind}. It must be a postgresql connection.)"
        )
    return pg8000.connect(
        user=attrs.user,
        database=attrs.database,
        password=attrs.password,
        port=attrs.port,
        host=attrs.host,
        ssl_context=ssl_context,
        timeout=timeout,
        tcp_keepalive=tcp_keepalive,
    )
    :param validation_timeout: Timeout to try to validate the connection
    :return: pg8000 connection
    """
    Redshift._validate_connection(database=database,
                                  host=host,
                                  port=port,
                                  user=user,
                                  password=password,
                                  tcp_keepalive=tcp_keepalive,
                                  application_name=application_name,
                                  validation_timeout=validation_timeout)
    conn = pg8000.connect(database=database,
                          host=host,
                          port=int(port),
                          user=user,
                          password=password,
                          ssl=True,
                          application_name=application_name,
                          tcp_keepalive=tcp_keepalive,
                          timeout=connection_timeout)
    cursor = conn.cursor()
    cursor.execute(f"set statement_timeout = {statement_timeout}")
    conn.commit()
    cursor.close()
    return conn


def get_connection(self, glue_connection):
    conn_details = self._session.glue.get_connection_details(name=glue_connection)
    props = conn_details["ConnectionProperties"]
    host = props["JDBC_CONNECTION_URL"].split(":")[2].replace("/", "")
    port, database = props["JDBC_CONNECTION_URL"].split(":")[3].split("/")
        self._id = id
        self._city = city
        self._state = state
        self._country = country
        self._latt = latt
        self._long = long

    def __str__(self):
        return f"{self.__dict__}"


st = time.time()
conn = None
try:
    conn = pg8000.connect(user='******', password='******',
                          database='testdb')
    with conn.cursor() as cursor:
        cursor.execute('select * from spectrum_markets_geocode')
        results = cursor.fetchall()
        codes = []
        for row in results:
            c = Geocode(id=row[0],
                        city=row[1],
                        state=row[2],
                        country=row[3],
                        latt=row[4],
                        long=row[5])
            codes.append(c)  # append the constructed Geocode, not the raw row
def get_md5_config():
    # Connect to the database
    conn = pg8000.connect(host=psql_ip, user=psql_username,
                          password=psql_password, database=psql_db_name)
    cursor = conn.cursor()
    # Query qytdb_devicedb for ip, type, name, snmp_ro_community,
    # ssh_username, ssh_password and enable_password
    cursor.execute(
        "select ip, type, name, snmp_ro_community, ssh_username, ssh_password, enable_password from qytdb_devicedb"
    )
    result = cursor.fetchall()

    for device in result:
        if device[1] == 'IOS Router':  # the device is a router
            # Fetch the device's running config with show run
            run_config_raw = ssh_singlecmd(str(device[0]), device[4],
                                           device[5], 'show run')
            list_run_config = run_config_raw.split('\n')
            location = 0
            host_location = 0  # used to find where hostname appears
            for i in list_run_config:
                if re.match('.*hostname .*', i):
                    host_location = location  # position of the hostname line
                else:
                    location += 1
            # keep everything from hostname onwards
            list_run_config = list_run_config[host_location:]
            run_config = '\n'.join(list_run_config)  # back to a single string
            # Compute the MD5 of the fetched config
            m = hashlib.md5()
            m.update(run_config.encode())
            md5_value = m.hexdigest()
            # Fetch the MD5 of the most recently saved config; note that a
            # backup is taken as soon as a device is added!
            # order by date desc limit 1 returns the latest record
            cursor.execute(
                "select hash from qytdb_deviceconfig where name = '" +
                device[2] + "' order by date desc limit 1")
            result = cursor.fetchall()
            if result[0][0] == md5_value:
                # Same MD5 as the last backup, so skip this device
                continue
            else:
                # The MD5 changed; back up the config and its MD5
                sqlcmd = ("INSERT INTO qytdb_deviceconfig (name, hash, config, date) VALUES ('"
                          + device[2] + "', '" + md5_value + "', '"
                          + run_config + "', '" + str(datetime.now()) + "')")
                cursor.execute(sqlcmd)
                conn.commit()
        elif device[1] == 'Nexus Switch':  # the device is a switch
            # Fetch the device's running config with show run
            run_config_raw = ssh_singlecmd(str(device[0]), device[4],
                                           device[5], 'show run')
            list_run_config = run_config_raw.split('\n')
            location = 0
            host_location = 0  # used to find where hostname appears
            for i in list_run_config:
                if re.match('.*hostname .*', i):
                    host_location = location  # position of the hostname line
                else:
                    location += 1
            # keep everything from hostname onwards
            list_run_config = list_run_config[host_location:]
            run_config = '\n'.join(list_run_config)  # back to a single string
            # Compute the MD5 of the fetched config
            m = hashlib.md5()
            m.update(run_config.encode())
            md5_value = m.hexdigest()
            # Fetch the MD5 of the most recently saved config; note that a
            # backup is taken as soon as a device is added!
            # order by date desc limit 1 returns the latest record
            cursor.execute(
                "select hash from qytdb_deviceconfig where name = '" +
                device[2] + "' order by date desc limit 1")
            result = cursor.fetchall()
            if result[0][0] == md5_value:
                # Same MD5 as the last backup, so skip this device
                continue
            else:
                # The MD5 changed; back up the config and its MD5
                sqlcmd = ("INSERT INTO qytdb_deviceconfig (name, hash, config, date) VALUES ('"
                          + device[2] + "', '" + md5_value + "', '"
                          + run_config + "', '" + str(datetime.now()) + "')")
                cursor.execute(sqlcmd)
                conn.commit()
        elif device[1] == 'ASA Firewall':  # the device is an ASA
            # Fetch show run; note that retrieving an ASA config works differently
            run_config_raw = ssh_multicmd_asa(
                str(device[0]), device[4], device[5],
                ['enable', device[6], 'terminal pager 0',
                 'more system:running-config'])
            list_run_config = run_config_raw.split('\n')
            location = 0
            host_location = 0  # used to find where hostname appears
            for i in list_run_config:
                # hostname is matched differently here, because it appears
                # several times in an ASA config
                if re.match('^hostname .*', i):
                    host_location = location  # position of the hostname line
                else:
                    location += 1
            # keep from hostname onwards and drop the useless trailing lines
            list_run_config = list_run_config[host_location:-4]
            run_config = '\n'.join(list_run_config)  # back to a single string
            # Compute the MD5 of the fetched config
            m = hashlib.md5()
            m.update(run_config.encode())
            md5_value = m.hexdigest()
            # Fetch the MD5 of the most recently saved config; note that a
            # backup is taken as soon as a device is added!
            # order by date desc limit 1 returns the latest record
            cursor.execute(
                "select hash from qytdb_deviceconfig where name = '" +
                device[2] + "' order by date desc limit 1")
            result = cursor.fetchall()
            if result[0][0] == md5_value:
                # Same MD5 as the last backup, so skip this device
                continue
            else:
                # The MD5 changed; back up the config and its MD5
                sqlcmd = ("INSERT INTO qytdb_deviceconfig (name, hash, config, date) VALUES ('"
                          + device[2] + "', '" + md5_value + "', '"
                          + run_config + "', '" + str(datetime.now()) + "')")
                cursor.execute(sqlcmd)
                conn.commit()
def get_pg_conn():
    global db_connections
    pid = str(os.getpid())
    conn = None

    # get the database connection for this PID
    try:
        conn = db_connections[pid]
    except KeyError:
        pass

    if conn is None:
        # connect to the database
        if debug:
            comment('Connect [%s] %s:%s:%s:%s' % (pid, db_host, db_port, db, db_user))

        try:
            conn = pg8000.connect(user=db_user, host=db_host, port=db_port,
                                  database=db, password=db_pwd, ssl=ssl,
                                  timeout=None)
        except Exception as e:
            print(e)
            print('Unable to connect to Cluster Endpoint')
            cleanup(conn)
            return ERROR

        # set default search path
        search_path = 'set search_path = \'$user\',public,%s' % schema_name
        if target_schema is not None and target_schema != schema_name:
            search_path = search_path + ', %s' % target_schema

        if debug:
            comment(search_path)

        cursor = None
        try:
            cursor = conn.cursor()
            cursor.execute(search_path)
        except pg8000.Error as e:
            if re.match('schema "%s" does not exist' % schema_name,
                        e.message) is not None:
                print('Schema %s does not exist' % schema_name)
            else:
                print(e.message)
            return None

        if query_group is not None:
            set_query_group = 'set query_group to %s' % query_group
            if debug:
                comment(set_query_group)
            cursor.execute(set_query_group)

        if query_slot_count is not None and query_slot_count != 1:
            set_slot_count = 'set wlm_query_slot_count = %s' % query_slot_count
            if debug:
                comment(set_slot_count)
            cursor.execute(set_slot_count)

        # set a long statement timeout
        set_timeout = "set statement_timeout = '1200000'"
        if debug:
            comment(set_timeout)
        cursor.execute(set_timeout)

        # set application name
        set_name = "set application_name to 'ColumnEncodingUtility-v%s'" % __version__
        if debug:
            comment(set_name)
        cursor.execute(set_name)

        # cache the connection
        db_connections[pid] = conn

    return conn
# 亁颐堂 official site: www.qytang.com
# 乾颐盾 is a comprehensive security course developed by 明教教主, the current
# lead instructor at 亁颐堂, covering traditional network security
# (firewalls, IPS...) together with Python and penetration testing!
import hashlib as hasher
import datetime as date
from random import randint
import json
import rsa_sign
from QYTCoin_Class import *

# Initialize the database and create the qytcoin_chain table
import pg8000
conn = pg8000.connect(host='10.1.1.1', user='******', password='******',
                      database='qytangdb')
cursor = conn.cursor()
cursor.execute("drop table qytcoin_chain")
cursor.execute(
    "CREATE TABLE qytcoin_chain (index int,chain_datetime timestamp,previous_hash varchar(1000),transaction_data varchar(9999),transaction_data_id int,secure_code varchar(60),hash varchar(1000))"
)
conn.commit()


# Genesis block
def create_genesis_block():
    # Creation time of the genesis block
    genesis_block_datetime = date.datetime.now()
    # Insert the genesis block into the database; every field except the
    # timestamp is empty
    conn = pg8000.connect(host='10.1.1.1',
def handle(event, context):
    try:
        connection = pg8000.connect(host=rds_host, port=5432,
                                    user=rds_username,
                                    password=rds_password,
                                    database=rds_db_name,
                                    connect_timeout=5)
    except:
        logger.error(
            "ERROR: Unexpected error: Could not connect to Postgres RDS instance."
        )
        sys.exit()

    source = event['data']['object']['source']
    cur = connection.cursor()

    # ---------------- DELETE EVENT AND INSERT ------------------------
    created = datetime.datetime.fromtimestamp(int(
        event['created'])).strftime('%Y-%m-%d %H:%M:%S')
    data = (event['id'], event['livemode'], event['type'], created)
    try:
        cur.execute("DELETE FROM stripe.event WHERE id = '" + event['id'] + "'")
        cur.execute(
            "INSERT INTO stripe.event (id, livemode, type, created) VALUES (%s, %s, %s, %s)",
            data)
    except pg8000.Error as e:
        response = {
            "status": "Fail",
            "message": "Database error while handling events",
            "error": e
        }
        return response

    # this is a charge event, so we insert charges and respective billing data
    # ---------------- DELETE CHARGE AND INSERT ------------------------
    charge_event = event['data']['object']
    created = datetime.datetime.fromtimestamp(int(
        charge_event['created'])).strftime('%Y-%m-%d %H:%M:%S')
    # values ordered to match the column list of the INSERT below
    data = (charge_event['id'], event['id'], charge_event["customer"],
            charge_event["invoice"], charge_event["livemode"],
            charge_event["paid"], charge_event["amount"],
            charge_event["currency"], charge_event["refunded"],
            charge_event["captured"], charge_event["balance_transaction"],
            charge_event["failure_message"], charge_event["failure_code"],
            charge_event["amount_refunded"], charge_event["description"],
            charge_event["dispute"], created)
    try:
        cur.execute("DELETE FROM stripe.charge WHERE id = '" +
                    event['data']['object']['id'] + "'")
        cur.execute(
            """INSERT INTO stripe.charge
               (id, event_id, customer_id, invoice_id, livemode, paid, amount,
                currency, refunded, captured, balance_transaction,
                failure_message, failure_code, amount_refunded, description,
                dispute, created)
               VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s)""",
            data)
    except pg8000.Error as e:
        response = {
            "status": "Fail",
            "message": "Database error while handling charges"
        }
        return response
    # connection.commit()

    # ---------------- HANDLE BILLING DETAILS ------------------------
    billing_details = event['data']['object']['source']
    data = (
        billing_details['id'],
        billing_details["name"],
        billing_details["customer"],
        billing_details["object"],
        billing_details['last4'],
        billing_details["type"],
        billing_details["brand"],
        billing_details["exp_month"],
        billing_details["exp_year"],
        billing_details["fingerprint"],
        billing_details["country"],
        billing_details["address_line1"],
        billing_details["address_line2"],
        billing_details["address_state"],
        billing_details["address_zip"],
        billing_details["address_country"],
        billing_details["cvc_check"],
        billing_details["address_line1_check"],
        billing_details["address_zip_check"],
    )
    try:
        cur.execute("DELETE FROM stripe.billing WHERE id = '" +
                    billing_details['id'] + "'")
        cur.execute(
            """INSERT INTO stripe.billing
               ("id", "name", "customer_id", "object", "last4", "type", "brand",
                "exp_month", "exp_year", "fingerprint", "country",
                "address_line1", "address_line2", "address_state",
                "address_zip", "address_country", "cvc_check",
                "address_line1_check", "address_zip_check")
               VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s)""",
            data)
    except pg8000.Error as e:
        response = {
            "status": "Fail",
            "message": "Database error while handling billing"
        }
        return response

    # ---------------- HANDLE BILLING DETAILS JOIN TABLE ------------------------
    data = (
        charge_event['id'],
        billing_details["id"],
    )
    try:
        cur.execute("DELETE FROM stripe.charge_billing WHERE billing_id = '" +
                    billing_details['id'] + "' AND charge_id = '" +
                    charge_event['id'] + "'")
        cur.execute(
            """INSERT INTO stripe.charge_billing("charge_id", "billing_id")
               VALUES (%s, %s)""",
            data)
    except pg8000.Error as e:
        response = {
            "status": "Fail",
            "message": "Database error while handling charges and billing"
        }
        return response

    connection.commit()
    connection.close()

    logger.error("SUCCESS: copied to Postgres RDS instance.")
    return source
def connect_to_postgres():
    try:
        return connect(host=host, user=user, port=port, database=database)
    except (InterfaceError, ProgrammingError):
        return False
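# Both connect_to_postgres variants above return False instead of raising,
# which suggests callers poll until the server accepts connections. A
# minimal sketch of such a wait loop; the name wait_for_postgres and the
# retry policy are assumptions, not the original caller:
import time

def wait_for_postgres(retries=30, delay=1.0):
    for _ in range(retries):
        conn = connect_to_postgres()
        if conn:
            return conn
        time.sleep(delay)  # server not ready yet; back off and retry
    raise RuntimeError("PostgreSQL did not become available in time")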