def test_rknn():
    import cv2
    from rknn.api import RKNN

    rknn = RKNN()

    # Load RKNN model
    print('--> Load RKNN model')
    ret = rknn.load_rknn('lprnet.rknn')
    if ret != 0:
        print('Load RKNN model failed!')
        exit(ret)

    # Init runtime environment
    print('--> Init runtime environment')
    ret = rknn.init_runtime(target='rk1808')
    if ret != 0:
        print('Init runtime environment failed')
        exit(ret)
    print('done')

    # Inference
    print('--> Running model')
    image = cv2.imread('data/eval/000256.png')
    outputs = rknn.inference(inputs=[image])
    preds = outputs[0]
    labels, pred_labels = decode(preds, CHARS)  # decode() and CHARS are defined elsewhere
    print(labels)
    print('done')

    rknn.release()
class RecModel():
    def __init__(self, args):
        self.model_path = args.model_path
        self.input_size = args.image_size
        self.threshold = args.threshold
        self.rknn = RKNN()
        self.load_model()

    def load_model(self):
        ret = self.rknn.load_rknn(self.model_path)
        if ret != 0:
            print('load rknn model failed')
            exit(ret)
        print('load model success')
        ret = self.rknn.init_runtime(target="rk3399pro", device_id="TD033101190400338")
        if ret != 0:
            print('Init runtime environment failed')
            exit(ret)
        print('init runtime success')
        version = self.rknn.get_sdk_version()
        print(version)
        # Inference
        print('--> Running model')

    def extract_features(self, img):
        if img.shape[0] > 112:
            img = cv2.resize(img, (112, 112), interpolation=cv2.INTER_AREA)
        elif img.shape[0] < 112:
            img = cv2.resize(img, (112, 112), interpolation=cv2.INTER_CUBIC)
        img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
        outputs = self.rknn.inference(inputs=[img])[0]
        embedding = preprocessing.normalize(outputs).flatten()
        return embedding

    def load_facebank(self):
        self.features = np.load('npy/facebank_mtcnn_rknn_128.npy')
        self.names = np.load('npy/names_mtcnn_rknn_128.npy')

    def compare_feature(self, img):
        # extract_features returns a 1-D vector; reshape to (1, D) so the
        # broadcast below yields (1, D, 1) - (1, D, N)
        feature = self.extract_features(img).reshape(1, -1)
        diff = np.expand_dims(feature, 2) - np.expand_dims(np.transpose(self.features, [1, 0]), 0)
        dist = np.sum(np.power(diff, 2), axis=1)
        minimum = np.min(dist, axis=1)
        min_idx = np.argmin(dist, axis=1)
        min_idx[minimum > self.threshold] = -1  # if no match, set idx to -1
        if min_idx[0] == -1:
            return (np.array([['None']]), np.array([0]))
        else:
            return self.names[min_idx], minimum
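The broadcasting in `compare_feature` is easy to misread: a (1, D, 1) query minus a (1, D, N) facebank gives per-dimension differences, and summing the squares over axis 1 yields one squared-L2 distance per enrolled face. A standalone sketch of the same matching logic (the array contents here are made up for illustration):

```python
import numpy as np

# hypothetical facebank: 3 enrolled faces with 4-dim embeddings
features = np.array([[1.0, 0.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0, 0.0],
                     [0.0, 0.0, 1.0, 0.0]])
query = np.array([0.9, 0.1, 0.0, 0.0]).reshape(1, -1)  # closest to face 0

diff = np.expand_dims(query, 2) - np.expand_dims(features.T, 0)  # (1, 4, 3)
dist = np.sum(np.power(diff, 2), axis=1)                         # (1, 3)
print(int(np.argmin(dist, axis=1)[0]))  # 0
```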
def main():
    # create RKNN object
    rknn = RKNN(verbose=True)

    # Direct load RKNN model
    rknn.load_rknn('./emotion.rknn')  # load the exported RKNN model
    print('--> load success')  # print success message

    result = None
    # read the image
    input_image = cv2.imread('./data/image/happy.jpg', cv2.IMREAD_COLOR)
    # resized image and the largest detected face object
    detected_face, face_coor = format_image(input_image)

    # if a face was detected,
    if detected_face is not None:
        # convert the image to a tensor as float32 (RKNN does not support float64);
        # the tensor shape is (1, 2304), detected_face is 48x48
        tensor = image_to_tensor(detected_face).astype(np.float32)

        # init runtime environment
        print('--> Init runtime environment')
        ret = rknn.init_runtime()
        # print error message on failure
        if ret != 0:
            print('Init runtime environment failed')

        # run the RKNN model
        result = rknn.inference(inputs=[tensor])
        print('run success')

        # convert the list to an array; result is the emotion prediction array
        result = np.array(result)

    # if result exists
    if result is not None:
        # the emotion array has 7 values, so iterate over range(7)
        for i in range(7):
            # if a value in the emotion array equals 1,
            if result[0][0][i] == 1:
                # print the emotion prediction
                print('Your emotion is ' + EMOTIONS[i] + '.')
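Checking `result[0][0][i] == 1` only fires when the network emits an exact one-hot vector; with softmax-like probabilities the maximum is usually below 1 and nothing would print. A more robust decoding (a standalone sketch with a hypothetical `EMOTIONS` list, not the original script's definition) uses `np.argmax`:

```python
import numpy as np

# hypothetical label list; the original script defines its own EMOTIONS
EMOTIONS = ['angry', 'disgusted', 'fearful', 'happy', 'sad', 'surprised', 'neutral']

def decode_emotion(result):
    """Pick the highest-scoring class instead of requiring an exact 1."""
    scores = np.array(result).reshape(-1)  # flatten (1, 1, 7) -> (7,)
    return EMOTIONS[int(np.argmax(scores))]

# softmax-like output where no entry is exactly 1
print(decode_emotion([[[0.05, 0.02, 0.03, 0.7, 0.1, 0.05, 0.05]]]))  # happy
```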
def __deal(self, model, post_func):
    rknn = RKNN()
    ret = rknn.load_rknn(path=model)
    if ret != 0:
        logger.error('Load RKNN model failed')
        exit(ret)

    # init runtime environment
    logger.debug('--> Init runtime environment')
    ret = rknn.init_runtime()
    if ret != 0:
        logger.error('Init runtime environment failed')
        exit(ret)
    logger.debug('Init done')

    r_list = [self.rfd]
    w_list = [self.wfd]
    e_list = [self.rfd, self.wfd]
    while True:
        fd_r_list, fd_w_list, fd_e_list = select.select(
            r_list, w_list, e_list, select_timeout)
        if not (fd_r_list or fd_w_list or fd_e_list):
            continue
        for rs in fd_r_list:
            if rs is self.rfd:
                decimg = self.__recieve_frame()
                # logger.debug('__recieve_frame: %d' % (len(decimg)))
                if decimg is None:
                    logger.error('decimg is None')
                    continue
                outputs = rknn.inference(inputs=[decimg])
                data = post_func(outputs)
                for ws in fd_w_list:
                    if ws is self.wfd:
                        self.__send_result(data)
        for es in fd_e_list:
            logger.error("error fd list: %s" % (es))
    rknn.release()
    logger.debug('__deal finish')
# visible_input = cv2.cvtColor(visible_input_temp, cv2.COLOR_BGR2RGB)
visible_input = cv2.resize(visible_input_temp, (INPUT_SIZE_WIDTH, INPUT_SIZE_HEIGHT),
                           interpolation=cv2.INTER_CUBIC)

# init runtime environment
print('--> Init runtime environment')
ret = rknn.init_runtime()
if ret != 0:
    print('Init runtime environment failed')
    exit(ret)
print('done')

# Inference
print('--> Running model')
outputs = rknn.inference(inputs=[visible_input, infrared_input])
print('done')
print('inference result: ', outputs)

img = np.array(outputs[0])
print(img.shape)
print('img.min =', np.min(img))
# img = img - np.min(img)
print('img.max =', np.max(img))
img = img / np.max(img) * 255
# NumPy array shapes are (rows, cols), i.e. (height, width)
img = np.reshape(img, (INPUT_SIZE_HEIGHT, INPUT_SIZE_WIDTH))
print(img.shape)
print(visible_input.shape)
input_low_eval = np.expand_dims(input_low, axis=0)  # (1, 400, 600, 3)

sample_dir = './results/test/'
if not os.path.isdir(sample_dir):
    os.makedirs(sample_dir)

# Run DecomNet
print('-> Running model 1')
ret1 = rknn.init_runtime()
if ret1 != 0:
    print('Init runtime environment 1 failed')
    exit(ret1)
print('done')

print('-> Inference model 1')
decom_r_low, decom_i_low = rknn.inference(inputs=[input_low_eval])  # decom_r_low has 3 channels
# print(type(decom_r_low))  # numpy.ndarray
print('=> 1 run success')

# save images (works)
save_images(os.path.join(sample_dir, 'decom_r_low.png'), decom_r_low)
save_images(os.path.join(sample_dir, 'decom_i_low.png'), decom_i_low)

# ------------------------------------------------------------------------------------------------------------
# Load RKNN model
print('--> Load RKNN model 2')
ret2 = rknn.load_rknn('./2.rknn')
if ret2 != 0:
    print('Load RKNN model 2 failed')
    exit(ret2)
print('done')

# Run RestorationNet
print('--> Running model 2')
# init the runtime environment
print('---------------------------------------> Init runtime environment')
ret = rknn.init_runtime()
# print error message on failure
if ret != 0:
    print('---------------------------------> Init runtime environment failed')

i = 0
while cv2.waitKey(10) != ord('q'):
    # load the image; the base dataset is the 2018 driving video
    full_image = scipy.misc.imread("driving_data/" + str(i) + ".jpg", mode="RGB")
    # resize the image
    image = scipy.misc.imresize(full_image[-150:], [66, 200]) / 255.0
    # convert the image dtype from float64 to float32
    image = image.astype('float32')
    # sess.run drives the graph in TensorFlow; with RKNN, use rknn.inference instead
    degrees = rknn.inference(inputs=[image])
    # degrees comes back as a list, so convert to float for operations such as arctan
    for j in degrees:
        d = float(j)
    # convert the arctangent output from radians to degrees
    d = d * 180 / scipy.pi
    call("clear")
    print("Predicted steering angle: " + str(d) + " degrees")
    # show the frame in a GUI window
    cv2.imshow("frame", cv2.cvtColor(full_image, cv2.COLOR_RGB2BGR))
    i += 1
print('load success')

# init the RKNN runtime
print('--> Init runtime environment')
ret = rknn.init_runtime()
if ret != 0:
    print('Init runtime environment failed')

# run the model
print('--> Running model')
LSTM_array = []
# run once per test sample
for i in range(Y_test.shape[0]):
    outputs = rknn.inference(inputs=[X_test[i]])
    outputs = np.array(outputs)
    LSTM_array.append(outputs[0][0])
    print('progress:', i, '/', Y_test.shape[0])
LSTM_array = np.array(LSTM_array)

ARIMA_array = []
# run ARIMA
for i in range(Y_test.shape[0]):
    rows = 838 - 243 + i
    series = pd.read_csv('project.csv', header=0, nrows=rows,
                         index_col=0, squeeze=True)
ret = rknn.init_runtime()
b = time.time()
if ret != 0:
    print('init runtime failed.')
    exit(ret)
print('done %fs' % (b - a))

if NEED_RUN_PIC:
    img = cv2.imread(PIC_PATH)  # BGR
    img_input = cv2.resize(img, (SIZE_W, SIZE_H))
    img_input = np.transpose(img_input, (2, 0, 1))  # HWC -> CHW
    print("Input shape: ", img_input.shape)

    # inference
    print('--> inference')
    a = time.time()
    outputs = rknn.inference(inputs=[img_input], data_format='nchw')
    b = time.time()
    print('done %f' % (b - a))

    input0_data = outputs[0]
    input1_data = outputs[1]
    input2_data = outputs[2]
    if (SIZE_W, SIZE_H) == (416, 256):
        input0_data = input0_data.reshape(3, 11, 8, 13)
        input1_data = input1_data.reshape(3, 11, 16, 26)
        input2_data = input2_data.reshape(3, 11, 32, 52)
    else:
        input0_data = input0_data.reshape(3, 11, 13, 13)
        input1_data = input1_data.reshape(3, 11, 26, 26)
        input2_data = input2_data.reshape(3, 11, 52, 52)
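The reshape dimensions above follow the usual YOLOv3 layout: 3 anchors per scale, 11 channels per anchor (plausibly 4 box offsets + 1 objectness + 6 classes, assuming a 6-class model), and grid sizes equal to the input size divided by the strides 32, 16, and 8. A quick check of the grid arithmetic:

```python
# grid sizes (rows, cols) for the two input resolutions used above
for (w, h) in [(416, 256), (416, 416)]:
    grids = [(h // s, w // s) for s in (32, 16, 8)]
    print((w, h), grids)
# (416, 256) [(8, 13), (16, 26), (32, 52)]
# (416, 416) [(13, 13), (26, 26), (52, 52)]
```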
    exit(ret)
print('done')

# Inference
print('--> Running model')
img = cv2.imread('./test3.jpg')
h = img.shape[0]
w = img.shape[1]
img_matlab = img.copy()
img_matlab = cv2.cvtColor(img_matlab, cv2.COLOR_BGR2RGB)
img_matlab = cv2.resize(img_matlab, (270, 207))  # default is bilinear
img_matlab = np.swapaxes(img_matlab, 0, 2)
outputs = rknn.inference(inputs=[img_matlab], data_format='nchw')
# show_outputs(outputs)
out_prob1 = outputs[1]
out_conv4_2 = outputs[0]
out_prob1 = out_prob1.reshape(1, 2, 130, 99)
out_conv4_2 = out_conv4_2.reshape(1, 4, 130, 99)
total_boxes = np.zeros((0, 9), np.float32)  # np.float is deprecated
# prob1 = np.load('prob1.npy')
# conv4 = np.load('conv4.npy')
print("out_prob1 ", out_prob1)
print("out_conv4_2 ", out_conv4_2)
boxes = generateBoundingBox(out_prob1[0, 1, :, :], out_conv4_2[0], 0.6, 0.6)
print("boxes ", boxes)
def main():
    with open(yaml_file, 'r') as F:
        config = yaml.safe_load(F)
    # print('config is:')
    # print(config)

    model_type = config['running']['model_type']
    print('model_type is {}'.format(model_type))

    rknn = RKNN(verbose=True)

    print('--> config model')
    rknn.config(**config['config'])
    print('done')

    print('--> Loading model')
    load_function = getattr(rknn, _model_load_dict[model_type])
    ret = load_function(**config['parameters'][model_type])
    if ret != 0:
        print('Load model failed! Ret = {}'.format(ret))
        exit(ret)
    print('done')

    ####
    # print('hybrid_quantization')
    # ret = rknn.hybrid_quantization_step1(dataset=config['build']['dataset'])

    if model_type != 'rknn':
        print('--> Building model')
        ret = rknn.build(**config['build'])
        if ret != 0:
            print('Build model failed!')
            exit(ret)
    else:
        print('--> skip Building model step, cause the model is already rknn')

    if config['running']['export'] is True:
        print('--> Export RKNN model')
        ret = rknn.export_rknn(**config['export_rknn'])
        if ret != 0:
            print('Export RKNN model failed!')
            exit(ret)
    else:
        print('--> skip Export model')

    if (config['running']['inference'] is True) or (config['running']['eval_perf'] is True):
        print('--> Init runtime environment')
        ret = rknn.init_runtime(**config['init_runtime'])
        if ret != 0:
            print('Init runtime environment failed')
            exit(ret)

        print('--> load img')
        img = cv2.imread(config['img']['path'])
        print('img shape is {}'.format(img.shape))
        # img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
        inputs = [img]

        if config['running']['inference'] is True:
            print('--> Running model')
            config['inference']['inputs'] = inputs
            # print(config['inference'])
            outputs = rknn.inference(inputs)
            # outputs = rknn.inference(config['inference'])
            print('len of output {}'.format(len(outputs)))
            print('outputs[0] shape is {}'.format(outputs[0].shape))
            print(outputs[0][0][0:2])
        else:
            print('--> skip inference')

        if config['running']['eval_perf'] is True:
            print('--> Begin evaluate model performance')
            config['inference']['inputs'] = inputs
            perf_results = rknn.eval_perf(inputs=[img])
        else:
            print('--> skip eval_perf')
    else:
        print('--> skip inference')
        print('--> skip eval_perf')
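For reference, a hypothetical minimal YAML file covering the keys this driver script reads; the section names match the lookups in `main()`, but the file name, paths, and parameter values here are assumptions, not the project's real config:

```yaml
running:
  model_type: rknn        # must be a key of _model_load_dict
  export: false
  inference: true
  eval_perf: false
config: {}                # kwargs for rknn.config()
parameters:
  rknn:
    path: ./model.rknn    # kwargs for the chosen load function
build:
  do_quantization: false  # kwargs for rknn.build()
export_rknn:
  export_path: ./model_out.rknn
init_runtime: {}          # kwargs for rknn.init_runtime()
inference: {}
img:
  path: ./test.jpg
```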
import cv2
import numpy as np
from rknn.api import RKNN

if __name__ == '__main__':
    rknn = RKNN(verbose=False)
    rknn.load_rknn('wxb.rknn')
    rknn.init_runtime()
    print("init runtime done")

    input_data = np.array([10., ], dtype='float32')
    # inference expects a list of input arrays
    output = rknn.inference(inputs=[input_data])
    print(output)

    rknn.release()
    print('Done')
def run_ssd(img_path, priorbox_path):
    # caffe_proto = "./MobileNetSSD_deploy.prototxt"
    caffe_proto = "./MobileNetSSD_deploy_truncated.prototxt"
    caffe_weight = "./MobileNetSSD_deploy10695.caffemodel"
    rknn_model = "./pedestrian_ssd.rknn"
    caffe2rknn(caffe_proto, caffe_weight, rknn_model)

    print("run ssd")
    rknn = RKNN(verbose=True)
    ret = rknn.load_rknn(path=rknn_model)
    ret = rknn.init_runtime()
    # ret = rknn.init_runtime(target='rk1808', device_id='012345789AB')

    img = cv2.imread(img_path)
    img = cv2.resize(img, (300, 300))
    print("shape:", img.shape)

    outlen = 7668  # change to your model
    priorbox = []
    with open(priorbox_path) as f:
        for line in f:
            arr = line.strip().split(",")
            priorbox = list(map(float, arr))
    priorbox = np.reshape(np.array(priorbox), (2, outlen))

    outputs = rknn.inference(inputs=[img])  # , data_format="nchw", data_type="float32"
    print("pb:", priorbox.shape, priorbox)
    print("loc:", outputs[0].shape, outputs[0])
    print("conf:", outputs[1].shape, outputs[1])

    NUM_RESULTS = outlen // 4
    NUM_CLASSES = 2
    box_priors = priorbox[0].reshape((NUM_RESULTS, 4))
    box_var = priorbox[1].reshape((NUM_RESULTS, 4))
    loc = outputs[0].reshape((NUM_RESULTS, 4))
    conf = outputs[1].reshape((NUM_RESULTS, NUM_CLASSES))

    # compute softmax over the two class scores
    conf = [[x / (x + y), y / (x + y)] for x, y in np.exp(conf)]

    # Post process: decode each prior box
    for i in range(0, NUM_RESULTS):
        pb = box_priors[i]
        lc = loc[i]
        var = box_var[i]
        pb_w = pb[2] - pb[0]
        pb_h = pb[3] - pb[1]
        pb_cx = (pb[0] + pb[2]) * 0.5
        pb_cy = (pb[1] + pb[3]) * 0.5
        bbox_cx = var[0] * lc[0] * pb_w + pb_cx
        bbox_cy = var[1] * lc[1] * pb_h + pb_cy
        bbox_w = math.exp(var[2] * lc[2]) * pb_w
        bbox_h = math.exp(var[3] * lc[3]) * pb_h
        xmin = (bbox_cx - bbox_w * 0.5) * 300  # input width
        ymin = (bbox_cy - bbox_h * 0.5) * 300  # input height
        xmax = (bbox_cx + bbox_w * 0.5) * 300  # input width
        ymax = (bbox_cy + bbox_h * 0.5) * 300  # input height
        score = conf[i][1]
        if score > 0.9:
            print("score:", score)
            cv2.rectangle(img, (int(xmin), int(ymin)), (int(xmax), int(ymax)),
                          (0, 0, 255), 3)
    # img is BGR from cv2; convert to RGB for matplotlib
    plt.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
    plt.show()
    print("ssd finished")
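The list-comprehension softmax in `run_ssd` exponentiates raw scores directly, which can overflow for large logits. A numerically stable variant subtracts the row maximum first; this is a standalone sketch, not part of the original script:

```python
import numpy as np

def softmax(scores):
    """Row-wise softmax; subtracting the row max avoids overflow in exp."""
    scores = np.asarray(scores, dtype=np.float64)
    shifted = scores - scores.max(axis=1, keepdims=True)
    e = np.exp(shifted)
    return e / e.sum(axis=1, keepdims=True)

probs = softmax([[1000.0, 1001.0]])  # naive exp(1000.0) would overflow
print(probs)  # ~[[0.2689, 0.7311]]
```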
def main(folder="test"):
    files = os.listdir(folder)
    for i in range(len(files)):
        img = cv2.imread("{}/{}".format(folder, files[i]))
        img = (img - 127.5) / 127.5  # normalize to [-1, 1]
        h, w = img.shape[:2]
        print("w, h = ", w, h)
        input = cv2.resize(img, (PRESET, PRESET), interpolation=cv2.INTER_CUBIC)
        input = input.reshape(PRESET, PRESET, 3)
        input = np.array(input, dtype=np.float32)

        rknn = RKNN()
        # rknn.config(channel_mean_value='0 0 0 255', reorder_channel='0 1 2')

        # Load TensorFlow model
        print('--> Loading model')
        rknn.load_tensorflow(tf_pb='pretrained/SR_freeze.pb',
                             inputs=['ESRGAN_g/Conv2D'],
                             outputs=['output_image'],
                             input_size_list=[[PRESET, PRESET, 3]])
        print('done')

        # Build model
        print('--> Building model')
        rknn.build(do_quantization=False)
        print('done')

        # Export RKNN model
        rknn.export_rknn('./sr_rknn.rknn')

        # Direct load RKNN model
        rknn.load_rknn('./sr_rknn.rknn')

        # init runtime environment
        print('--> Init runtime environment')
        ret = rknn.init_runtime()
        if ret != 0:
            print('Init runtime environment failed')

        # Inference
        print('--> Running model')
        output_image = rknn.inference(inputs=[input])
        print('complete')

        out = np.array(output_image, dtype=np.float64)
        print("output_image = ", out.shape)
        out = np.squeeze(out)
        Y_ = out.reshape(PRESET * 4, PRESET * 4, 3)
        Y_ = cv2.resize(Y_, (w * 4, h * 4), interpolation=cv2.INTER_CUBIC)
        print("output shape is ", Y_.shape)

        # post-processing: map [-1, 1] back to [0, 255]
        Y_ = (Y_ + 1) * 127.5
        cv2.imwrite("{}/{}_yval.png".format(OUT_DIR, i), Y_)

        # Evaluate perf on simulator
        # rknn.eval_perf()

        # Release RKNN context
        rknn.release()
img = cv2.imread('./test2-1.jpg')
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
img = cv2.resize(img, (24, 24))  # default is bilinear
img = np.swapaxes(img, 0, 2)
# img = cv2.resize(img, (48, 48))  # default is bilinear
tempimg = np.ones((1, 3, 48, 48))  # dummy all-ones input; img above is not fed below

# init runtime environment
# print('--> Init runtime environment')
ret = rknn.init_runtime()
if ret != 0:
    print('Init runtime environment failed')
    exit(ret)
print('done')

# Inference
print('--> Running model')
outputs = rknn.inference(inputs=[tempimg], data_format='nchw')
print(outputs)
print('done')

# perf
# print('--> Begin evaluate model performance')
# perf_results = rknn.eval_perf(inputs=[img])
# print('done')

rknn.release()
recent_data = np.squeeze(recent_data, axis=0)
print("recent_data.shape:", recent_data.shape)
print("recent_data:", recent_data)

# init runtime environment
print('--> Init runtime environment')
ret = rknn.init_runtime()
if ret != 0:
    print('Init runtime environment failed')

for i in range(predict_days):
    # Inference
    # print('--> Running model')
    test_predict = rknn.inference(inputs=[recent_data])
    test_predict = np.array(test_predict, dtype=np.float32)
    test_predict = np.squeeze(test_predict)
    # print("result = ", test_predict)
    # print(test_predict.shape)

    # Evaluate perf on simulator
    # rknn.eval_perf(inputs=[recent_data])

    test_predict1 = reverse_min_max_scaling(price, test_predict[0:5])
    test_predict2 = reverse_min_max_scaling(volume, test_predict[5])
    real_test_predict = np.append(test_predict1, test_predict2)

    time = time + datetime.timedelta(days=1)
    str_time = time.strftime("%Y-%m-%d")
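`reverse_min_max_scaling` is defined elsewhere in the project; a typical pairing of min-max scaling with its inverse looks like the following. This is an assumed sketch for illustration, not the original helper:

```python
import numpy as np

def min_max_scaling(data):
    """Scale values to [0, 1] using the data's own min/max."""
    lo, hi = np.min(data), np.max(data)
    return (data - lo) / (hi - lo + 1e-7)

def reverse_min_max_scaling(org_data, scaled):
    """Map scaled values back to the original data's range."""
    lo, hi = np.min(org_data), np.max(org_data)
    return scaled * (hi - lo + 1e-7) + lo

price = np.array([100.0, 150.0, 200.0])
scaled = min_max_scaling(price)
restored = reverse_min_max_scaling(price, scaled)
print(np.allclose(restored, price))  # True
```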
if __name__ == '__main__':
    rknn = RKNN(verbose=False)
    rknn.register_op('./truncatediv/TruncateDiv.rknnop')
    rknn.register_op('./exp/Exp.rknnop')
    rknn.load_tensorflow(tf_pb='./custom_op_math.pb',
                         inputs=['input'],
                         outputs=['exp_0'],
                         input_size_list=[[1, 512]])
    rknn.build(do_quantization=False)
    # rknn.export_rknn('./rknn_test.rknn')
    # rknn.load_rknn('./rknn_test.rknn')
    rknn.init_runtime()
    print("init runtime done")

    in_data = np.full((1, 512), 50.0)
    in_data = in_data.astype(dtype='float32')
    output = rknn.inference(inputs=[in_data])
    print(output)

    rknn.release()
if ret != 0:
    print('Export xception.rknn failed!')
    exit(ret)
print('done')

# ret = rknn.load_rknn('./xception.rknn')

# Set inputs
img = cv2.imread(IMG_PATH)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

# init runtime environment
print('--> Init runtime environment')
ret = rknn.init_runtime()
# ret = rknn.init_runtime(target='rk1808')
if ret != 0:
    print('Init runtime environment failed')
    exit(ret)
print('done')

# Inference
print('--> Running model')
outputs = rknn.inference(inputs=[img])
# outputs[0].tofile('out.txt', '\n')
show_outputs(outputs)
print('done')

rknn.release()
# data of shape [1, 98, 40] goes in and data of shape [1, 3920] comes out
out = tf.compat.v1.layers.Flatten()(output_)
# the operations so far all produced TensorFlow tensors, which RKNN cannot consume,
# so convert the tensor to a NumPy array
numpy_data = tf.Session().run(out)
print("----> data standardization complete")

# artificially constructed audio data representing pure silence
silence = np.zeros([3920], dtype=np.float32)

# feed the [1, 3920] NumPy array into the network
test_predict = rknn.inference(inputs=[numpy_data])
test_predict = np.array(test_predict, dtype=np.float64)
print('done')
# print('inference result: ', test_predict)
# print('result shape: ', test_predict.shape)

# result 0 means silence
if test_predict == 0:
    print("silence")
# result 1 means data the model was not trained on
if test_predict == 1:
    print("unknown")
# result 2 means the word "Yes"
if test_predict == 2: