def test_savepoint_lost_still_runs(self):
    User = self.classes.User
    s = self.session(bind=self.bind)
    trans = s.begin_nested()
    s.connection()
    u1 = User(name='ed')
    s.add(u1)

    # kill off the transaction
    nested_trans = trans._connections[self.bind][1]
    nested_trans._do_commit()
    is_(s.transaction, trans)
    assert_raises(
        sa_exc.DBAPIError,
        s.rollback
    )

    assert u1 not in s.new
    is_(trans._state, _session.CLOSED)
    is_not_(s.transaction, trans)
    is_(s.transaction._state, _session.ACTIVE)
    is_(s.transaction.nested, False)
    is_(s.transaction._parent, None)
def test_info(self):
    p = self._queuepool_fixture(pool_size=1, max_overflow=0)

    c = p.connect()
    self.assert_(not c.info)
    self.assert_(c.info is c._connection_record.info)

    c.info['foo'] = 'bar'
    c.close()
    del c

    c = p.connect()
    self.assert_('foo' in c.info)

    c.invalidate()
    c = p.connect()
    self.assert_('foo' not in c.info)

    c.info['foo2'] = 'bar2'
    c.detach()
    self.assert_('foo2' in c.info)

    c2 = p.connect()
    is_not_(c.connection, c2.connection)
    assert not c2.info
    assert 'foo2' in c.info
def test_no_instance_level_collections(self):
    @event.listens_for(self.Target, "event_one")
    def listen_one(x, y):
        pass

    t1 = self.Target()
    t2 = self.Target()
    t1.dispatch.event_one(5, 6)
    t2.dispatch.event_one(5, 6)
    is_(
        t1.dispatch.__dict__['event_one'],
        self.Target.dispatch.event_one.
        _empty_listeners[self.Target]
    )

    @event.listens_for(t1, "event_one")
    def listen_two(x, y):
        pass

    is_not_(
        t1.dispatch.__dict__['event_one'],
        self.Target.dispatch.event_one.
        _empty_listeners[self.Target]
    )
    is_(
        t2.dispatch.__dict__['event_one'],
        self.Target.dispatch.event_one.
        _empty_listeners[self.Target]
    )
def test_no_instance_level_collections(self):
    @event.listens_for(self.Target, "event_one")
    def listen_one(x, y):
        pass

    t1 = self.Target()
    t2 = self.Target()
    t1.dispatch.event_one(5, 6)
    t2.dispatch.event_one(5, 6)
    is_(
        self.Target.dispatch._empty_listener_reg[self.Target]["event_one"],
        t1.dispatch.event_one,
    )

    @event.listens_for(t1, "event_one")
    def listen_two(x, y):
        pass

    is_not_(
        self.Target.dispatch._empty_listener_reg[self.Target]["event_one"],
        t1.dispatch.event_one,
    )
    is_(
        self.Target.dispatch._empty_listener_reg[self.Target]["event_one"],
        t2.dispatch.event_one,
    )
def test_reconnect(self):
    """test that an 'is_disconnect' condition will invalidate the
    connection, and additionally dispose the previous connection pool
    and recreate."""

    db_pool = self.db.pool

    # make a connection
    conn = self.db.connect()

    # connection works
    conn.execute(select([1]))

    # create a second connection within the pool, which we'll ensure
    # also goes away
    conn2 = self.db.connect()
    conn2.close()

    # two connections opened total now
    assert len(self.dbapi.connections) == 2

    # set it to fail
    self.dbapi.shutdown()
    assert_raises(
        tsa.exc.DBAPIError,
        conn.execute, select([1])
    )

    # assert was invalidated
    assert not conn.closed
    assert conn.invalidated

    # close shouldn't break
    conn.close()

    is_not_(self.db.pool, db_pool)

    # ensure all connections closed (pool was recycled)
    eq_(
        [c.close.mock_calls for c in self.dbapi.connections],
        [[call()], [call()]]
    )

    conn = self.db.connect()
    conn.execute(select([1]))
    conn.close()

    eq_(
        [c.close.mock_calls for c in self.dbapi.connections],
        [[call()], [call()], []]
    )
def test_chained_add_operator(self):
    User = self.classes.User
    session = Session()

    l1 = lambda: session.query(User)
    l2 = lambda q: q.filter(User.name == bindparam('name'))

    q1 = self.bakery(l1)
    q2 = q1 + l2
    is_not_(q2, q1)

    self._assert_cache_key(q1._cache_key, [l1])
    self._assert_cache_key(q2._cache_key, [l1, l2])
def test_generative_cache_key_regen(self):
    t1 = table("t1", column("a"), column("b"))

    s1 = select([t1])
    ck1 = s1._generate_cache_key()

    s2 = s1.where(t1.c.a == 5)
    ck2 = s2._generate_cache_key()

    ne_(ck1, ck2)
    is_not_(ck1, None)
    is_not_(ck2, None)
def test_chained_add(self):
    User = self.classes.User
    session = Session()

    def l1():
        return session.query(User)

    def l2(q):
        return q.filter(User.name == bindparam("name"))

    q1 = self.bakery(l1)
    q2 = q1.with_criteria(l2)
    is_not_(q2, q1)

    self._assert_cache_key(q1._cache_key, [l1])
    self._assert_cache_key(q2._cache_key, [l1, l2])
def test_chained_add(self):
    User = self.classes.User
    session = Session()

    l1 = lambda: session.query(User)
    l2 = lambda q: q.filter(User.name == bindparam('name'))

    q1 = self.bakery(l1)
    q2 = q1.with_criteria(l2)
    is_not_(q2, q1)

    self._assert_cache_key(
        q1._cache_key,
        [l1]
    )
    self._assert_cache_key(
        q2._cache_key,
        [l1, l2]
    )
def test_chained_add_operator(self):
    User = self.classes.User
    session = Session()

    def l1():
        return session.query(User)

    def l2(q):
        return q.filter(User.name == bindparam('name'))

    q1 = self.bakery(l1)
    q2 = q1 + l2
    is_not_(q2, q1)

    self._assert_cache_key(
        q1._cache_key,
        [l1]
    )
    self._assert_cache_key(
        q2._cache_key,
        [l1, l2]
    )
def test_to_metadata(self):
    comp1 = Computed("x + 2")

    m = MetaData()
    t = Table("t", m, Column("x", Integer), Column("y", Integer, comp1))

    is_(comp1.column, t.c.y)
    is_(t.c.y.server_onupdate, comp1)
    is_(t.c.y.server_default, comp1)

    m2 = MetaData()
    t2 = t.to_metadata(m2)
    comp2 = t2.c.y.server_default

    is_not_(comp1, comp2)
    is_(comp1.column, t.c.y)
    is_(t.c.y.server_onupdate, comp1)
    is_(t.c.y.server_default, comp1)

    is_(comp2.column, t2.c.y)
    is_(t2.c.y.server_onupdate, comp2)
    is_(t2.c.y.server_default, comp2)
def test_generative_cache_key_regen_w_del(self):
    t1 = table("t1", column("a"), column("b"))

    s1 = select([t1])
    ck1 = s1._generate_cache_key()

    s2 = s1.where(t1.c.a == 5)

    del s1

    # there is now a good chance that id(s3) == id(s1), make sure
    # cache key is regenerated
    s3 = s2.order_by(t1.c.b)
    ck3 = s3._generate_cache_key()

    ne_(ck1, ck3)
    is_not_(ck1, None)
    is_not_(ck3, None)
def test_to_metadata(self):
    identity1 = Identity("by default", on_null=True, start=123)

    m = MetaData()
    t = Table("t", m, Column("x", Integer), Column("y", Integer, identity1))

    is_(identity1.column, t.c.y)
    # is_(t.c.y.server_onupdate, identity1)
    is_(t.c.y.server_default, identity1)

    m2 = MetaData()
    t2 = t.to_metadata(m2)
    identity2 = t2.c.y.server_default

    is_not_(identity1, identity2)
    is_(identity1.column, t.c.y)
    # is_(t.c.y.server_onupdate, identity1)
    is_(t.c.y.server_default, identity1)

    is_(identity2.column, t2.c.y)
    # is_(t2.c.y.server_onupdate, identity2)
    is_(t2.c.y.server_default, identity2)
def test_autoincrement(self):
    Table(
        "ai_1",
        metadata,
        Column("int_y", Integer, primary_key=True, autoincrement=True),
        Column("int_n", Integer, DefaultClause("0"), primary_key=True),
    )
    Table(
        "ai_2",
        metadata,
        Column("int_y", Integer, primary_key=True, autoincrement=True),
        Column("int_n", Integer, DefaultClause("0"), primary_key=True),
    )
    Table(
        "ai_3",
        metadata,
        Column("int_n", Integer, DefaultClause("0"), primary_key=True),
        Column("int_y", Integer, primary_key=True, autoincrement=True),
    )
    Table(
        "ai_4",
        metadata,
        Column("int_n", Integer, DefaultClause("0"), primary_key=True),
        Column("int_n2", Integer, DefaultClause("0"), primary_key=True),
    )
    Table(
        "ai_5",
        metadata,
        Column("int_y", Integer, primary_key=True, autoincrement=True),
        Column("int_n", Integer, DefaultClause("0"), primary_key=True),
    )
    Table(
        "ai_6",
        metadata,
        Column("o1", String(1), DefaultClause("x"), primary_key=True),
        Column("int_y", Integer, primary_key=True, autoincrement=True),
    )
    Table(
        "ai_7",
        metadata,
        Column("o1", String(1), DefaultClause("x"), primary_key=True),
        Column("o2", String(1), DefaultClause("x"), primary_key=True),
        Column("int_y", Integer, autoincrement=True, primary_key=True),
    )
    Table(
        "ai_8",
        metadata,
        Column("o1", String(1), DefaultClause("x"), primary_key=True),
        Column("o2", String(1), DefaultClause("x"), primary_key=True),
    )
    metadata.create_all()

    table_names = [
        "ai_1",
        "ai_2",
        "ai_3",
        "ai_4",
        "ai_5",
        "ai_6",
        "ai_7",
        "ai_8",
    ]
    mr = MetaData(testing.db)

    for name in table_names:
        tbl = Table(name, mr, autoload=True)
        tbl = metadata.tables[name]

        # test that the flag itself reflects appropriately
        for col in tbl.c:
            if "int_y" in col.name:
                is_(col.autoincrement, True)
                is_(tbl._autoincrement_column, col)
            else:
                eq_(col.autoincrement, "auto")
                is_not_(tbl._autoincrement_column, col)

        # mxodbc can't handle scope_identity() with DEFAULT VALUES
        if testing.db.driver == "mxodbc":
            eng = [
                engines.testing_engine(
                    options={"implicit_returning": True}
                )
            ]
        else:
            eng = [
                engines.testing_engine(
                    options={"implicit_returning": False}
                ),
                engines.testing_engine(
                    options={"implicit_returning": True}
                ),
            ]

        for counter, engine in enumerate(eng):
            engine.execute(tbl.insert())
            if "int_y" in tbl.c:
                assert engine.scalar(select([tbl.c.int_y])) == counter + 1
                assert (
                    list(engine.execute(tbl.select()).first()).count(
                        counter + 1
                    )
                    == 1
                )
            else:
                assert 1 not in list(engine.execute(tbl.select()).first())
            engine.execute(tbl.delete())
def test_all_import(self):
    for package in self._all_dialect_packages():
        for item_name in package.__all__:
            is_not_(None, getattr(package, item_name))
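The test above is a generic `__all__` sanity check: every name a package exports must actually resolve to an attribute. A standalone sketch of the same pattern, run here against the stdlib `json` package purely for illustration:

```python
import json

# Every name exported via __all__ should resolve to a real attribute;
# a stale __all__ entry would make getattr() raise AttributeError.
for item_name in json.__all__:
    assert getattr(json, item_name) is not None

print("all names in json.__all__ resolve")
```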
def test_autoincrement(self):
    Table(
        "ai_1",
        metadata,
        Column("int_y", Integer, primary_key=True, autoincrement=True),
        Column("int_n", Integer, DefaultClause("0"), primary_key=True),
    )
    Table(
        "ai_2",
        metadata,
        Column("int_y", Integer, primary_key=True, autoincrement=True),
        Column("int_n", Integer, DefaultClause("0"), primary_key=True),
    )
    Table(
        "ai_3",
        metadata,
        Column("int_n", Integer, DefaultClause("0"), primary_key=True),
        Column("int_y", Integer, primary_key=True, autoincrement=True),
    )
    Table(
        "ai_4",
        metadata,
        Column("int_n", Integer, DefaultClause("0"), primary_key=True),
        Column("int_n2", Integer, DefaultClause("0"), primary_key=True),
    )
    Table(
        "ai_5",
        metadata,
        Column("int_y", Integer, primary_key=True, autoincrement=True),
        Column("int_n", Integer, DefaultClause("0"), primary_key=True),
    )
    Table(
        "ai_6",
        metadata,
        Column("o1", String(1), DefaultClause("x"), primary_key=True),
        Column("int_y", Integer, primary_key=True, autoincrement=True),
    )
    Table(
        "ai_7",
        metadata,
        Column("o1", String(1), DefaultClause("x"), primary_key=True),
        Column("o2", String(1), DefaultClause("x"), primary_key=True),
        Column("int_y", Integer, autoincrement=True, primary_key=True),
    )
    Table(
        "ai_8",
        metadata,
        Column("o1", String(1), DefaultClause("x"), primary_key=True),
        Column("o2", String(1), DefaultClause("x"), primary_key=True),
    )
    metadata.create_all()

    table_names = [
        "ai_1",
        "ai_2",
        "ai_3",
        "ai_4",
        "ai_5",
        "ai_6",
        "ai_7",
        "ai_8",
    ]
    mr = MetaData(testing.db)

    for name in table_names:
        tbl = Table(name, mr, autoload=True)
        tbl = metadata.tables[name]

        # test that the flag itself reflects appropriately
        for col in tbl.c:
            if "int_y" in col.name:
                is_(col.autoincrement, True)
                is_(tbl._autoincrement_column, col)
            else:
                eq_(col.autoincrement, "auto")
                is_not_(tbl._autoincrement_column, col)

        # mxodbc can't handle scope_identity() with DEFAULT VALUES
        if testing.db.driver == "mxodbc":
            eng = [
                engines.testing_engine(
                    options={"implicit_returning": True})
            ]
        else:
            eng = [
                engines.testing_engine(
                    options={"implicit_returning": False}),
                engines.testing_engine(
                    options={"implicit_returning": True}),
            ]

        for counter, engine in enumerate(eng):
            with engine.begin() as conn:
                conn.execute(tbl.insert())
                if "int_y" in tbl.c:
                    eq_(
                        conn.execute(select([tbl.c.int_y])).scalar(),
                        counter + 1,
                    )
                    assert (
                        list(conn.execute(tbl.select()).first()).count(
                            counter + 1
                        )
                        == 1
                    )
                else:
                    assert 1 not in list(
                        conn.execute(tbl.select()).first()
                    )
                conn.execute(tbl.delete())
def test_autoincrement(self):
    Table(
        'ai_1', metadata,
        Column('int_y', Integer, primary_key=True, autoincrement=True),
        Column('int_n', Integer, DefaultClause('0'), primary_key=True))
    Table(
        'ai_2', metadata,
        Column('int_y', Integer, primary_key=True, autoincrement=True),
        Column('int_n', Integer, DefaultClause('0'), primary_key=True))
    Table(
        'ai_3', metadata,
        Column('int_n', Integer, DefaultClause('0'), primary_key=True),
        Column('int_y', Integer, primary_key=True, autoincrement=True))
    Table(
        'ai_4', metadata,
        Column('int_n', Integer, DefaultClause('0'), primary_key=True),
        Column('int_n2', Integer, DefaultClause('0'), primary_key=True))
    Table(
        'ai_5', metadata,
        Column('int_y', Integer, primary_key=True, autoincrement=True),
        Column('int_n', Integer, DefaultClause('0'), primary_key=True))
    Table(
        'ai_6', metadata,
        Column('o1', String(1), DefaultClause('x'), primary_key=True),
        Column('int_y', Integer, primary_key=True, autoincrement=True))
    Table(
        'ai_7', metadata,
        Column('o1', String(1), DefaultClause('x'), primary_key=True),
        Column('o2', String(1), DefaultClause('x'), primary_key=True),
        Column('int_y', Integer, autoincrement=True, primary_key=True))
    Table(
        'ai_8', metadata,
        Column('o1', String(1), DefaultClause('x'), primary_key=True),
        Column('o2', String(1), DefaultClause('x'), primary_key=True))
    metadata.create_all()

    table_names = ['ai_1', 'ai_2', 'ai_3', 'ai_4',
                   'ai_5', 'ai_6', 'ai_7', 'ai_8']
    mr = MetaData(testing.db)

    for name in table_names:
        tbl = Table(name, mr, autoload=True)
        tbl = metadata.tables[name]

        # test that the flag itself reflects appropriately
        for col in tbl.c:
            if 'int_y' in col.name:
                is_(col.autoincrement, True)
                is_(tbl._autoincrement_column, col)
            else:
                eq_(col.autoincrement, 'auto')
                is_not_(tbl._autoincrement_column, col)

        # mxodbc can't handle scope_identity() with DEFAULT VALUES
        if testing.db.driver == 'mxodbc':
            eng = [engines.testing_engine(
                options={'implicit_returning': True})]
        else:
            eng = [
                engines.testing_engine(
                    options={'implicit_returning': False}),
                engines.testing_engine(
                    options={'implicit_returning': True})]

        for counter, engine in enumerate(eng):
            engine.execute(tbl.insert())
            if 'int_y' in tbl.c:
                assert engine.scalar(select([tbl.c.int_y])) \
                    == counter + 1
                assert list(
                    engine.execute(tbl.select()).first()).\
                    count(counter + 1) == 1
            else:
                assert 1 \
                    not in list(engine.execute(tbl.select()).first())
            engine.execute(tbl.delete())
def go():
    a = q.filter(addresses.c.id == 1).one()
    is_not_(a.user, None)
    u1 = sess.query(User).get(7)
    is_(a.user, u1)
def _run_cache_key_fixture(self, fixture, compare_values):
    case_a = fixture()
    case_b = fixture()

    for a, b in itertools.combinations_with_replacement(
        range(len(case_a)), 2
    ):
        if a == b:
            a_key = case_a[a]._generate_cache_key()
            b_key = case_b[b]._generate_cache_key()
            is_not_(a_key, None)
            is_not_(b_key, None)

            eq_(a_key.key, b_key.key)
            eq_(hash(a_key), hash(b_key))

            for a_param, b_param in zip(
                a_key.bindparams, b_key.bindparams
            ):
                assert a_param.compare(
                    b_param, compare_values=compare_values
                )
        else:
            a_key = case_a[a]._generate_cache_key()
            b_key = case_b[b]._generate_cache_key()

            if a_key.key == b_key.key:
                for a_param, b_param in zip(
                    a_key.bindparams, b_key.bindparams
                ):
                    if not a_param.compare(
                        b_param, compare_values=compare_values
                    ):
                        break
                else:
                    # this fails unconditionally since we could not
                    # find bound parameter values that differed.
                    # Usually we intended to get two distinct keys here
                    # so the failure will be more descriptive using the
                    # ne_() assertion.
                    ne_(a_key.key, b_key.key)
            else:
                ne_(a_key.key, b_key.key)

        # ClauseElement-specific test to ensure the cache key
        # collected all the bound parameters
        if isinstance(case_a[a], ClauseElement) and isinstance(
            case_b[b], ClauseElement
        ):
            assert_a_params = []
            assert_b_params = []
            visitors.traverse_depthfirst(
                case_a[a], {}, {"bindparam": assert_a_params.append}
            )
            visitors.traverse_depthfirst(
                case_b[b], {}, {"bindparam": assert_b_params.append}
            )

            # note we're asserting the order of the params as well as
            # if there are dupes or not.  ordering has to be
            # deterministic and matches what a traversal would provide.
            # regular traverse_depthfirst does produce dupes in cases
            # like
            # select([some_alias]).
            #     select_from(join(some_alias, other_table))
            # where a bound parameter is inside of some_alias.  the
            # cache key case is more minimalistic
            eq_(
                sorted(a_key.bindparams, key=lambda b: b.key),
                sorted(
                    util.unique_list(assert_a_params), key=lambda b: b.key
                ),
            )
            eq_(
                sorted(b_key.bindparams, key=lambda b: b.key),
                sorted(
                    util.unique_list(assert_b_params), key=lambda b: b.key
                ),
            )
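The helper above leans on two Python constructs worth calling out: `itertools.combinations_with_replacement`, which pairs every case index with every other index including itself, and the `for`/`else` statement, whose `else` suite runs only when the loop finishes without hitting `break`. A minimal standalone sketch of both, independent of the test fixture:

```python
import itertools

# combinations_with_replacement yields each unordered pair of indexes
# exactly once, including self-pairs (a, a) -- the helper uses these
# to compare every fixture case against every other, and against itself.
pairs = list(itertools.combinations_with_replacement(range(3), 2))
print(pairs)  # [(0, 0), (0, 1), (0, 2), (1, 1), (1, 2), (2, 2)]


def first_mismatch(xs, ys):
    # for/else: the else suite runs only when the loop completes
    # without ``break``, i.e. no mismatching pair was found.
    for x, y in zip(xs, ys):
        if x != y:
            print("mismatch:", x, y)
            break
    else:
        print("all pairs matched")


first_mismatch([1, 2, 3], [1, 2, 3])  # all pairs matched
first_mismatch([1, 2, 3], [1, 9, 3])  # mismatch: 2 9
```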