def testKernelRegularizerWithReuse(self):
    regularizer = lambda x: tf.reduce_sum(x) * 1e-3
    inputs = tf.random.uniform((5, 3), seed=1)
    _ = core_layers.dense(
        inputs, 2, name="my_dense", kernel_regularizer=regularizer)
    self.assertEqual(
        len(
            tf.compat.v1.get_collection(
                tf.compat.v1.GraphKeys.REGULARIZATION_LOSSES)), 1)
    _ = core_layers.dense(
        inputs, 2, name="my_dense", kernel_regularizer=regularizer,
        reuse=True)
    self.assertEqual(
        len(
            tf.compat.v1.get_collection(
                tf.compat.v1.GraphKeys.REGULARIZATION_LOSSES)), 1)
def testFunctionalDenseTwiceReuse(self):
    with self.cached_session():
        inputs = tf.random.uniform((5, 3), seed=1)
        core_layers.dense(inputs, 2, name='my_dense')
        vars1 = tf.compat.v1.trainable_variables()
        core_layers.dense(inputs, 2, name='my_dense', reuse=True)
        vars2 = tf.compat.v1.trainable_variables()
        self.assertEqual(vars1, vars2)
def testFunctionalDenseTwice(self):
    inputs = tf.random.uniform((5, 3), seed=1)
    core_layers.dense(inputs, 2)
    # Snapshot the variable store into a list: dict.values() is a live view
    # in Python 3 and would otherwise reflect the variables created by the
    # second dense call, making the first assertion fail.
    vars1 = list(_get_variable_dict_from_varstore().values())
    core_layers.dense(inputs, 2)
    vars2 = list(_get_variable_dict_from_varstore().values())
    self.assertEqual(len(vars1), 2)
    self.assertEqual(len(vars2), 4)
def forward_pass(self, inputs, training=None):
    out = core_layers.dense(
        inputs,
        self.units,
        name="dense_one",
        kernel_initializer=tf.compat.v1.ones_initializer(),
        kernel_regularizer="l2")
    with tf.compat.v1.variable_scope("nested_scope"):
        out = core_layers.dense(
            out,
            self.units,
            name="dense_two",
            kernel_initializer=tf.compat.v1.ones_initializer(),
            kernel_regularizer="l2")
    return out
def testFunctionalDenseWithCustomGetter(self):
    called = [0]

    def custom_getter(getter, *args, **kwargs):
        called[0] += 1
        return getter(*args, **kwargs)

    with tf.compat.v1.variable_scope('test', custom_getter=custom_getter):
        inputs = tf.random.uniform((5, 3), seed=1)
        core_layers.dense(inputs, 2)
    # The custom getter is invoked once per created variable: kernel and bias.
    self.assertEqual(called[0], 2)
def testFunctionalDenseInitializerFromScope(self):
    with tf.compat.v1.variable_scope(
        'scope',
        initializer=tf.compat.v1.ones_initializer()), self.cached_session():
        inputs = tf.random.uniform((5, 3), seed=1)
        core_layers.dense(inputs, 2)
        self.evaluate(tf.compat.v1.global_variables_initializer())
        weights = _get_variable_dict_from_varstore()
        self.assertEqual(len(weights), 2)
        # Check that the matrix weights got initialized to ones (from scope).
        self.assertAllClose(weights['scope/dense/kernel'].read_value(),
                            np.ones((3, 2)))
        # Check that the bias still got initialized to zeros.
        self.assertAllClose(weights['scope/dense/bias'].read_value(),
                            np.zeros((2)))
def testFunctionalDense(self):
    with self.cached_session():
        inputs = tf.random.uniform((5, 3), seed=1)
        outputs = core_layers.dense(
            inputs, 2, activation=tf.nn.relu, name='my_dense')
        self.assertEqual(
            len(
                tf.compat.v1.get_collection(
                    tf.compat.v1.GraphKeys.TRAINABLE_VARIABLES)), 2)
        self.assertEqual(outputs.op.name, 'my_dense/Relu')
def testFunctionalDenseInScope(self):
    with self.cached_session():
        with tf.compat.v1.variable_scope('test'):
            inputs = tf.random.uniform((5, 3), seed=1)
            core_layers.dense(inputs, 2, name='my_dense')
            var_dict = _get_variable_dict_from_varstore()
            var_key = 'test/my_dense/kernel'
            self.assertEqual(var_dict[var_key].name, '%s:0' % var_key)
        with tf.compat.v1.variable_scope('test1') as scope:
            inputs = tf.random.uniform((5, 3), seed=1)
            core_layers.dense(inputs, 2, name=scope)
            var_dict = _get_variable_dict_from_varstore()
            var_key = 'test1/kernel'
            self.assertEqual(var_dict[var_key].name, '%s:0' % var_key)
        with tf.compat.v1.variable_scope('test2'):
            inputs = tf.random.uniform((5, 3), seed=1)
            core_layers.dense(inputs, 2)
            var_dict = _get_variable_dict_from_varstore()
            var_key = 'test2/dense/kernel'
            self.assertEqual(var_dict[var_key].name, '%s:0' % var_key)
def forward_pass(self, inputs, training=None):
    if training:
        out = core_layers.dense(inputs, self.units, name="dense_training")
    else:
        out = core_layers.dense(inputs, self.units, name="dense_no_training")
    return out