import time

import numpy as np
from chainer import Variable, cuda


def main():
    print('Loading APC2015berkeley...')
    dataset = load_APC2015berkeley()
    N = len(dataset.target)
    print('''
N: {0}
dataset: {1}'''.format(N, dataset))
    bg_label = dataset.target_names.index('__background__')
    n_labels = len(dataset.target_names)
    max_batch_size = 10
    for i in xrange(0, N, max_batch_size):
        t_start = time.time()
        fname_batch = dataset.filenames[i:i+max_batch_size]
        label_batch = dataset.target[i:i+max_batch_size]
        blob_batch, bbox_batch, label_batch, roi_delta_batch = \
            load_batch_APC2015berkeley(fname_batch, label_batch,
                                       bg_label, n_labels)
        # show stats
        elapsed_time = time.time() - t_start
        head_fname_batch = ['/'.join(f.split('/')[-2:]) for f in fname_batch]
        print('''
elapsed_time: {0} [s]
fnames: {1}
blob: {2}
bboxes: {3}
label: {4}
roi_delta: {5}'''.format(elapsed_time, head_fname_batch, blob_batch.shape,
                         bbox_batch.shape, label_batch.shape,
                         roi_delta_batch.shape))
def batch_loop_train(self, batch_size):
    N = len(self.train_fnames)
    # a permutation visits every sample exactly once per epoch;
    # np.random.randint(0, N, N) samples with replacement and would
    # skip roughly a third of the dataset each pass
    random_index = np.random.permutation(N)
    for i in xrange(0, N, batch_size):
        batch_index = random_index[i:i+batch_size]
        batch_fnames = self.train_fnames[batch_index]
        batch_labels = self.train_target[batch_index]
        blobs, bboxes, t_labels, t_bboxes = load_batch_APC2015berkeley(
            fnames=batch_fnames,
            labels=batch_labels,
            bg_label=self.dataset.background_label,
            n_labels=len(self.dataset.target_names),
        )
        blobs, bboxes, t_labels, t_bboxes = \
            map(cuda.to_gpu, [blobs, bboxes, t_labels, t_bboxes])
        self.optimizer.zero_grads()
        # Chainer's volatile flag names are lowercase; 'off' keeps the
        # computation graph so that loss.backward() can run
        volatile = 'off'
        blobs = Variable(blobs, volatile=volatile)
        bboxes = Variable(bboxes, volatile=volatile)
        t_labels = Variable(t_labels, volatile=volatile)
        t_bboxes = Variable(t_bboxes, volatile=volatile)
        loss = self.model(blobs, bboxes, (t_labels, t_bboxes), train=True)
        loss.backward()
        self.optimizer.update()
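# The shuffled-minibatch indexing that batch_loop_train relies on can be
# illustrated standalone (a minimal sketch with NumPy only; the helper name
# iter_minibatches is made up for illustration):
#
# import numpy as np
#
# def iter_minibatches(n_samples, batch_size, rng=np.random):
#     """Yield index arrays covering all samples once, in random order."""
#     order = rng.permutation(n_samples)
#     for i in range(0, n_samples, batch_size):
#         yield order[i:i + batch_size]
#
# # every sample index appears exactly once across the batches, and only
# # the final batch may be shorter than batch_size (e.g. 10 samples in
# # batches of 4 gives batch lengths 4, 4, 2)
# batches = list(iter_minibatches(10, 4))

```python
import numpy as np

def iter_minibatches(n_samples, batch_size, rng=np.random):
    """Yield index arrays covering all samples once, in random order."""
    order = rng.permutation(n_samples)
    for i in range(0, n_samples, batch_size):
        yield order[i:i + batch_size]

# every sample index appears exactly once across the batches, and only
# the final batch may be shorter than batch_size
batches = list(iter_minibatches(10, 4))
```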
def batch_loop_test(self, batch_size):
    sum_loss = 0
    N = len(self.test_fnames)
    # permutation (not randint) so that every test sample is scored once
    random_index = np.random.permutation(N)
    for i in xrange(0, N, batch_size):
        batch_index = random_index[i:i+batch_size]
        batch_fnames = self.test_fnames[batch_index]
        batch_labels = self.test_target[batch_index]
        blobs, bboxes, t_labels, t_bboxes = load_batch_APC2015berkeley(
            fnames=batch_fnames,
            labels=batch_labels,
            bg_label=self.dataset.background_label,
            n_labels=len(self.dataset.target_names),
        )
        # mirror the training loop: move the arrays to the GPU and wrap
        # them in Variables, with volatile='on' so no computation graph
        # is retained during evaluation
        blobs, bboxes, t_labels, t_bboxes = \
            map(cuda.to_gpu, [blobs, bboxes, t_labels, t_bboxes])
        volatile = 'on'
        blobs = Variable(blobs, volatile=volatile)
        bboxes = Variable(bboxes, volatile=volatile)
        t_labels = Variable(t_labels, volatile=volatile)
        t_bboxes = Variable(t_bboxes, volatile=volatile)
        loss = self.model(blobs, bboxes, (t_labels, t_bboxes), train=False)
        sum_loss += float(loss.data)
        # the final batch may be smaller than batch_size, so average over
        # the actual number of samples in it
        print(float(loss.data) / len(batch_index))
    self.sum_loss.append(sum_loss / N)