def find_best_z(self, new_point, points, lp_x, test_labels):
    """
    Given a new point, calculate its best tagging by considering every pair of
    tagged points (p_i, p_j) with tags (z_i, z_j) and interpolating a candidate tag:
        z_k = (z_i * dist(p_j, new_point) + z_j * dist(p_i, new_point))
              / (dist(p_i, new_point) + dist(p_j, new_point))
    The candidate with the largest jump (z_i - z_k) / dist(p_i, new_point) is returned.
    """
    # V2: min/max of the label range (the binary-search refinement over this
    # range was left unfinished and is not used below)
    labels_min = min(test_labels)
    labels_max = max(test_labels)
    Z = []
    max_jump = 0
    best_z = 0
    for i, z_i in enumerate(lp_x):
        for j, z_j in enumerate(lp_x):
            # order the pair so that z1 >= z2
            if z_i > z_j:
                z1, z2 = z_i, z_j
                point1, point2 = points[i], points[j]
            else:
                z1, z2 = z_j, z_i
                point1, point2 = points[j], points[i]
            z_k = ((z1 * distance.euclidean(point2, new_point)) +
                   (z2 * distance.euclidean(point1, new_point))) / \
                  (distance.euclidean(point1, new_point) +
                   distance.euclidean(point2, new_point))
            # Z[i][j] = z_k
            # if the jump (z1 - z_k) / dist(p1, new_point) is bigger, keep z_k
            if distance.euclidean(point1, new_point) == 0:
                current_jump = 0
            else:
                current_jump = (z1 - z_k) / distance.euclidean(point1, new_point)
            if current_jump > max_jump:
                max_jump = current_jump
                best_z = z_k
    return best_z
def gap(data, refs=None, nrefs=20, ks=range(1, 11), method=None):
    shape = data.shape
    if refs is None:
        tops = data.max(axis=0)
        bots = data.min(axis=0)
        dists = scipy.matrix(scipy.diag(tops - bots))
        rands = scipy.random.random_sample(size=(shape[0], shape[1], nrefs))
        for i in range(nrefs):
            rands[:, :, i] = rands[:, :, i] * dists + bots
    else:
        rands = refs
    gaps = scipy.zeros((len(ks),))
    for (i, k) in enumerate(ks):
        g1 = method(n_clusters=k).fit(data)
        (kmc, kml) = (g1.cluster_centers_, g1.labels_)
        disp = sum([euclidean(data[m, :], kmc[kml[m], :]) for m in range(shape[0])])
        refdisps = scipy.zeros((rands.shape[2],))
        for j in range(rands.shape[2]):
            g2 = method(n_clusters=k).fit(rands[:, :, j])
            (kmc, kml) = (g2.cluster_centers_, g2.labels_)
            refdisps[j] = sum([euclidean(rands[m, :, j], kmc[kml[m], :]) for m in range(shape[0])])
        gaps[i] = scipy.log(scipy.mean(refdisps)) - scipy.log(disp)
    return gaps
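# A minimal usage sketch for gap() above (not part of the original snippet):
# the data array X, the use of sklearn's KMeans as the `method` callable, and the
# argmax-based choice of k are all illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans

ks = list(range(1, 11))
gap_values = gap(X, nrefs=20, ks=ks, method=KMeans)   # X: assumed (n_samples, n_features) array
best_k = ks[int(np.argmax(gap_values))]               # k with the largest gap statistic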
def predict(self, X):
    X = np.array(X)
    predictions = [self.classes_[int(euclidean(self.A1, x) > euclidean(self.B1, x))]
                   for x in X]
    return predictions
def collect_mean_values(hand_idx, gt_labels, pr_labels, gt_joints, pr_joints,
                        errors_per_joint, check_gesture_equality):
    """Collects regression errors for each joint.

    Args:
        hand_idx: index of the hand.
        gt_labels: ground truth labels list.
        pr_labels: predicted labels list.
        gt_joints: ground truth joints list.
        pr_joints: predicted joints list.
        errors_per_joint: keeps errors (L2) for each joint separately.
        check_gesture_equality: true if the gesture equality check is on.
    """
    compared = 0
    total = 0
    for i in xrange(len(gt_labels)):
        gt_label = gt_labels[i][hand_idx]
        pr_label = pr_labels[i][hand_idx]
        gt_joint = gt_joints[i][hand_idx]
        pr_joint = pr_joints[i][hand_idx]
        if len(gt_joint) > 0:
            total += 1
        if ((not check_gesture_equality) or gt_label == pr_label) and \
                len(gt_joint) > 0 and len(gt_joint) == len(pr_joint):
            errors_per_joint[gt_label][0].append(
                distance.euclidean(gt_joint[0], pr_joint[0]))
            errors_per_joint[gt_label][1].append(
                distance.euclidean(gt_joint[1], pr_joint[1]))
            compared += 1
    return compared, total
def _pan_head_to_nearest_face(self, rects, img_shape):
    centers = []
    screen_centers = []
    for r in rects:
        center = (r[:2] + r[2:]) / 2
        center_origin = center - (np.array(img_shape)[1::-1] / 2)
        centers.append(center_origin)
        screen_centers.append(center)
    if len(centers) > 0:
        min_center = min(centers, key=lambda x: distance.euclidean(x, [0, 0]))
        move_scale = np.clip(
            distance.euclidean(min_center, [0, 0]) / self.TURN_THRESHOLD, 0.2, 1.0)
        new_angle = self.head.pan()
        if min_center[0] <= -(self.TURN_THRESHOLD / 2):
            new_angle -= self.TURN_SPEED * move_scale
            self._face_img = self._face_left_img
        elif min_center[0] >= (self.TURN_THRESHOLD / 2):
            new_angle += self.TURN_SPEED * move_scale
            self._face_img = self._face_right_img
        else:
            self._face_img = self._face_center_img
        self.head.set_pan(new_angle, timeout=0.0)
    return screen_centers
def getSensors(self):
    """ Observe the color of the nearest animat. """
    nearest_prey = min(
        self.world.animats,
        key=lambda animat: euclidean(self.world.predator.coords, animat.coords)
    )
    print 'Nearest prey: %s (%s)' % (nearest_prey, nearest_prey.__dict__)
    if euclidean(self.world.predator.coords, nearest_prey.coords) > DIST_MAX:
        print 'Predator is too far away from the nearest prey!'
        relative_color = BKGD_COLOR
    else:
        relative_color = abs(nearest_prey.color - BKGD_COLOR)
    print 'Relative color: %s' % relative_color
    sensors = [
        relative_color[2] +
        relative_color[1] * (COLOR_MAX + 1) +
        relative_color[0] * (COLOR_MAX + 1) ** 2
    ]
    print 'Sensors color: %s' % sensors
    return sensors
def find(self, page_dict):
    fl = page_dict.get("ecom_features")
    page_features = [float(fl[0]), float(fl[1]), float(fl[2]), float(fl[3]),
                     float(fl[8]), float(fl[9]), float(fl[10]), float(fl[11])]
    page_coordinates = self.scaler.transform(page_features)
    print page_coordinates
    from scipy.spatial.distance import euclidean
    dist_product_cluster_center = euclidean(self.clusters_properties.get("prod")[0], page_coordinates)
    dist_category_cluster_center = euclidean(self.clusters_properties.get("cat")[0], page_coordinates)
    if dist_product_cluster_center < dist_category_cluster_center:
        cluster = "prod"
        group = self.__position_inside_cluster(dist_product_cluster_center,
                                               self.clusters_properties.get("prod"))
        page_dict["ecom_kmeans_dist"] = dist_product_cluster_center
    else:
        cluster = "cat"
        group = self.__position_inside_cluster(dist_category_cluster_center,
                                               self.clusters_properties.get("cat"))
        page_dict["ecom_kmeans_dist"] = dist_category_cluster_center
    if cluster == "prod":  # and group ...
        page_dict["category"] = 'ecom_product'
    elif cluster == "cat":  # and group ...
        page_dict["category"] = 'ecom_category'
def distance(point1, point2):
    x1 = point1[0]
    x2 = point2[0]
    # 'rock' and 'c' are expected to be defined in the enclosing scope
    if rock[point2[0]][point2[1]][point2[2]] == 0:
        return (1 + 4 * abs(x1 - x2)) * euclidean(point1, point2)
    else:
        return (1 + 4 * abs(x1 - x2)) * c * euclidean(point1, point2)
def __str__(self):
    X = np.array(self.get_points())
    data = {
        'size': X.shape[0],
        'center': ', '.join(str(x) for x in self.center().tolist()),
        'max_dist': -1 if X.shape[0] == 0 else max(
            euclidean(x, self.center()) for x in X.tolist()
        ),
        'min_dist': -1 if X.shape[0] == 0 else min(
            euclidean(x, self.center()) for x in X.tolist()
        ),
        'mean_dist': -1 if X.shape[0] == 0 else np.mean([
            euclidean(x, self.center()) for x in X.tolist()
        ]),
        'points': X
    }
    return self.header.format(
        data['center'], data['max_dist'], data['min_dist'],
        data['mean_dist'], data['size'],
        '\n'.join(
            ', '.join(str(x) for x in row) for row in data['points'].tolist()
        )
    )
def compare_photos_simple(filename, photos):
    """
    Compares a modified image to a flickr API result
    INPUT: file name of a modified image, dict of flickr API search results
    OUTPUT: a CSV line pairing the file name with the closest matching flickr static URL
    Uses simple normalization: sets photos to be the same size, and grayscale
    """
    f = os.path.basename(filename)
    # define the results csv line as the base file name and the flickr static url
    min_dist = float('inf')
    best_pick = None
    # load the file image
    im2 = Image.open(filename)
    for p in photos:
        PHOTO = 'https://static.flickr.com/%s/%s_%s.jpg' % (p['server'], p['id'], p['secret'])
        # get the image data from Flickr
        r = requests.get(PHOTO)
        im1 = Image.open(StringIO(r.content))
        # shrink both photos down to 100x100 pixels, convert to luminance values
        norm1, norm2 = normalize_photos(im1, im2, h=100, w=100)
        # keep the photo whose normalized pixels are closest in euclidean distance
        if euclidean(norm1.flatten(), norm2.flatten()) <= min_dist:
            best_pick = PHOTO
            min_dist = euclidean(norm1.flatten(), norm2.flatten())
    return f + ',' + best_pick + '\n'
def index_i(X, labels):
    k = get_k(labels)
    if k == 2:
        return 0
    centroids = get_centers(X, labels)
    center = X.mean(0)
    ek = 0.
    e1 = 0.
    for i, x in enumerate(X):
        ek += euclidean(x, centroids[labels[i]])
        e1 += euclidean(x, center)
    pair_dist = pdist(np.vstack(centroids), 'correlation')
    dk = pair_dist.max()
    mk = pair_dist.min()
    i_ = (1. / k * np.float(e1) / ek * dk * mk) ** 2
    return i_
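# A hedged usage sketch for index_i() above (not part of the original snippet):
# it assumes get_k/get_centers are defined in the same module, X is an (n, d)
# numpy array, and sklearn is available; a larger score is treated as better.
from sklearn.cluster import KMeans

scores = {}
for k in range(2, 8):
    labels = KMeans(n_clusters=k).fit_predict(X)
    scores[k] = index_i(X, labels)
best_k = max(scores, key=scores.get)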
def clase_kvecinos_cercanos(k, i, c):
    """
    Function: clase_kvecinos_cercanos
    Descrp: Returns the classes of the k nearest neighbours
    Args:
        -> k: number of neighbours to take into account
        -> i: instance to analyse
        -> c: training set
    Return:
        -> List of the classes of the k nearest neighbours
    """
    # if the lengths match, instance i already carries its class as the last element
    if len(i) == len(c[0]):
        distances = [distance.euclidean(i[:-1], v[:-1]) for v in c]
    else:
        distances = [distance.euclidean(i, v[:-1]) for v in c]
    # build the (distance, class) pairs
    par = []
    for idx in range(0, len(c)):
        par += [(distances[idx], c[idx][-1])]
    # keep the k instances with the smallest distance
    par = sorted(par)[:k]
    # and keep only the class, dropping the distances
    par = [p[1] for p in par]
    return par
def visit(self, ax, poly, poly_vor, c1, c2, x, y, splits):
    scatter(ax, x, y)
    polys = sum([h.polys for h in self._heap_visitors], [])
    candidates_union = cascaded_union(polys)
    points = {}
    for h in self._heap_visitors:
        for x, y in zip(h.x, h.y):
            points[(x, y)] = np.array((x, y))
    points = list(points.values())  # remove duplicates
    points.sort(key=lambda p: distance.euclidean(self._p, p))
    if self._mode == 'union':
        ax.plot(self._p[0], self._p[1], marker='x', zorder=99, c='red', ms=10, mew=5)
        draw_poly(ax, candidates_union, 'none', lw=2.0, hatch='x')
    elif self._mode == 'dist':
        ax.plot(self._p[0], self._p[1], marker='x', zorder=99, c='blue', ms=3, mew=1)
        for p in points:
            ax.plot([self._p[0], p[0]], [self._p[1], p[1]], 'r-')
        draw_poly(ax, candidates_union, 'none', lw=2.0)
    elif self._mode == 'top':
        ax.plot(self._p[0], self._p[1], marker='x', zorder=99, c='blue', ms=3, mew=1)
        for p in points[:self._nns]:
            ax.plot([self._p[0], p[0]], [self._p[1], p[1]], 'r-')
        c = plt.Circle(self._p, distance.euclidean(self._p, points[self._nns - 1]),
                       edgecolor='red', zorder=99, lw=1.0, fill=False, linestyle='dashed')
        ax.add_artist(c)
        draw_poly(ax, candidates_union, 'none', lw=2.0)
def kMeans(old_center, c):
    index1 = 0
    for item in c2:
        index2 = 0
        max_distance = distance.euclidean(key_min, key_max)
        for cent in old_center:
            dist = distance.euclidean(item, cent)
            if dist < max_distance:
                max_distance = dist
                class2[index1] = index2
            index2 += 1
        index1 += 1
    index = 0
    countnumb = np.zeros(k)
    for key in c2:
        for numb in range(0, k, 1):
            if class2[index] == numb:
                new_center[numb][0] += key[0]
                new_center[numb][1] += key[1]
                countnumb[numb] += 1
        index += 1
    for numb in range(0, k, 1):
        new_center[numb][0] /= countnumb[numb]
        new_center[numb][1] /= countnumb[numb]
    temp = 0
    for numb in range(0, k, 1):
        if new_center[numb][0] == old_center[numb][0] and new_center[numb][1] == old_center[numb][1]:
            temp += 1
    if temp == k:
        c += 1
    if c == 2:
        return new_center, class2
    else:
        old_center = new_center
        return kMeans(old_center, c)
def get_propagated_labels(t):
    if aHandler.animate:
        aHandler.frame = aHandler.frame + 1
    try:
        Y_old = Y_t.copy()
        for i in range(initial_labels.shape[1]):
            Y_new = (Laplacian * Alpha).dot(Y_t[i]) + OneMinusAlpha.dot(Y_0[i])
            # np.copyto(Y_t[i], Y_new)
            Y_t[i] = Y_new
        # class mass normalization
        class_prior = np.sum(initial_labels[:num_labeled_points], axis=0) / float(num_labeled_points)
        class_mass = np.sum(initial_labels[num_labeled_points:], axis=0) / float(initial_labels.shape[0] - num_labeled_points)
        class_scaling = class_prior / class_mass
        # np.copyto(Y_scaled, class_scaling * Y_t.T)
        Y_scaled[:] = class_scaling * Y_t.T
        color_map = generate_color_map(Y_scaled.T, num_labeled_points, num_soft_labeled_points)
        # color_map = generate_color_map(np.array(Y_t), num_labeled_points, num_soft_labeled_points)
        if use_gui:
            if t == 0:
                # display first label assignment
                scat_orig_unlabeled.set_array(color_map)
            scat_labeled.set_array(color_map[:num_labeled_points])
            scat_soft_labeled.set_array(color_map[num_labeled_points: num_labeled_points + num_soft_labeled_points])
            scat_unlabeled.set_array(color_map[num_labeled_points + num_soft_labeled_points:])
            header.set_text("Label propagation. Iteration %i" % aHandler.frame)
        ___([euclidean(Y_old[i], Y_t[i]) for i in range(Y_old.shape[0])])
        if all([euclidean(Y_old[i], Y_t[i]) < 0.022 for i in range(Y_old.shape[0])]) or aHandler.frame >= max_iterations:
            if use_gui:
                header.set_text("%s %s" % (header.get_text(), "[end]"))
            aHandler.stop()
    except KeyboardInterrupt:
        aHandler.killed = True
        aHandler.stop()
    return
def performAction(self, action):
    """ Perform the chosen action. """
    action = MimicryPreyInteraction.ACTIONS[int(action[0])]
    nearest_animat = min(
        self.world.animats + [self.world.predator],
        key=lambda animat: euclidean(self.world.mimicker.coords, animat.coords)
    )
    if euclidean(self.world.mimicker.coords, nearest_animat.coords) > DIST_MAX:
        relative_color = BKGD_COLOR
    else:
        relative_color = abs(nearest_animat.color - BKGD_COLOR)
    print 'Mimicker action: %s' % action
    print 'Mimicker color: %s' % self.world.mimicker.color
    if action != 'NOP':
        color_id, adjustment = action[0], action[1]
        self.world.mimicker.adjust_color(color_id, adjustment)
        print 'New color: %s' % self.world.mimicker.color
def plot_graph(self, data=None, iter=None, directory="graph_plots\\"):
    # TODO: this should be generalized and added to Vizualize.py
    viz = Visualize()
    if data is not None:
        viz.do_plot(zip(*data[:iter])[:3], color='y', marker='.')
        # viz.do_plot(zip(*data[:iter])[:3], color=self.data.Y[:iter], marker='.')
    viz.do_plot(zip(*self.get_nodes_positions()), color='r', marker='o')
    dis_avg = np.mean([distance.euclidean(edg.head.data.pos, edg.tail.data.pos)
                       for edg in self.gng.graph.edges])
    dis_std = np.std([distance.euclidean(edg.head.data.pos, edg.tail.data.pos)
                      for edg in self.gng.graph.edges])
    for e in self.gng.graph.edges:
        if distance.euclidean(e.head.data.pos, e.tail.data.pos) - dis_avg > 1. * dis_std:
            viz.do_plot(zip(*[e.head.data.pos, e.tail.data.pos]), color='y', marker='-')
        else:
            viz.do_plot(zip(*[e.head.data.pos, e.tail.data.pos]), color='r', marker='-')
    if not os.path.exists(directory):
        os.makedirs(directory)
    filename = str(time.time()) + '.png'
    if iter is None:
        viz.end_plot(fig=directory + '_' + filename)
    else:
        viz.end_plot(fig=directory + filename)
def update_source_neighborSet(sourceID, sourceNodes, relayNodes):
    """
    This function updates the set of the neighboring candidate relays of a source.
    """
    source = sourceNodes[sourceID]
    neighborSet = source.neighborSet
    moveIn = []
    indices = [index for index in range(len(relayNodes)) if index not in neighborSet]
    for ind in indices:
        try:
            if euclidean(source.position, relayNodes[ind].position) <= const.RADIUS:
                moveIn.append(ind)
        except:
            print "source-%d position:" % (source.ID,)
            print source.position
            print "relay-%d position:" % (relayNodes[ind].ID,)
            print relayNodes[ind].position
            raise
    moveOut = []
    for ind in neighborSet:
        if euclidean(source.position, relayNodes[ind].position) > const.RADIUS:
            moveOut.append(ind)
    source.neighborSet = [nID for nID in neighborSet if nID not in moveOut]
    source.neighborSet.extend(moveIn)
    return None
def calculate_score(self):
    # Check if center is in its own neighbors
    if self.p_coords in self.coordinates:
        error_message = 'Point given in its own neighborhood for point ' + str(self.p_coords)
        raise RuntimeError(error_message)
    # Cohesion: average(distance(coord, center))
    distances = []
    for coord in self.coordinates:
        distances.append(distance.euclidean(self.p_coords, coord))
    self.cohesion = np.mean(distances)
    # Adhesion: average(min(distance(coord1, coord2)))
    tot_libs = len(self.lib_numbers)
    distances.clear()
    lib_distances = []
    for lib1 in range(tot_libs):
        for lib2 in range(tot_libs):
            if lib1 != lib2:
                lib_distances.append(distance.euclidean(self.coordinates[lib1], self.coordinates[lib2]))
        distances.append(min(lib_distances))
        lib_distances.clear()
    self.adhesion = np.mean(distances)
    # Neighborhood score: A / (sqrt(2) * C^2)
    self.neighbor_score = self.adhesion / ((self.cohesion ** 2) * math.sqrt(2))
def plot_graph(self, data=None, iter=None, directory="graph_plots\\"):
    # TODO: this should be generalized and added to Vizualize.py
    viz = Visualize()
    colors = ['r', 'b', 'k', 'g', 'm', 'c'] * 1000  # FIXME
    if data is not None:
        viz.do_plot(zip(*data[:iter]), color='y', marker='.')
        # viz.do_plot(zip(*data[:iter]), color=self.data.Y[:iter], marker='.')
    labels = set([n.data.label for n in self.graph.nodes])
    d = {l: [n for n in self.graph.nodes if n.data.label == l] for l in labels}
    for ico, label in enumerate(d):
        viz.do_plot(zip(*[n.data.pos for n in d[label]]), color=colors[ico], marker='o')
        for node in d[label]:
            node_links = [[node.data.pos, n.data.pos] for n in node.neighbors()]
            for nl in node_links:
                viz.do_plot(zip(*nl), color=colors[ico], marker='-')
    dis_avg = np.mean([distance.euclidean(edg.head.data.pos, edg.tail.data.pos)
                       for edg in self.graph.edges])
    dis_std = np.std([distance.euclidean(edg.head.data.pos, edg.tail.data.pos)
                      for edg in self.graph.edges])
    for e in self.graph.edges:
        if e.head.data.label != e.tail.data.label:
            if distance.euclidean(e.head.data.pos, e.tail.data.pos) - dis_avg > 1. * dis_std:
                viz.do_plot(zip(*[e.head.data.pos, e.tail.data.pos]), color='w', marker='-')
    if not os.path.exists(directory):
        os.makedirs(directory)
    filename = str(time.time()) + '.png'
    if iter is None:
        viz.end_plot(fig=directory + '_' + filename)
    else:
        viz.end_plot(fig=directory + filename)
def closest(X, point):
    # each element of X is expected to be a (label, coordinates) pair
    smallest = [euclidean(point, X[0][1]), X[0][0]]
    distance = 0
    for points in X[1:]:
        distance = euclidean(point, points[1])
        if distance < smallest[0]:
            smallest = [distance, points[0]]
    return smallest
def Fisher(centroidList, clusterDict):
    K = len(clusterDict)
    l = 0
    meanSumOfPointDistances = 0.0
    for cluster in range(K):
        mc = len(clusterDict[cluster])
        sumOfDistances = 0.0
        if len(clusterDict[cluster]) > 0:
            l += 1
            for point in clusterDict[cluster]:
                distanceToCentroid = distance.euclidean(point[1], centroidList[cluster])
                sumOfDistances += distanceToCentroid
            meanSumOfPointDistances += (sumOfDistances / float(mc))
        else:
            print "Cluster " + str(cluster) + " is empty"
    print meanSumOfPointDistances
    sumOfPairDistances = 0.0
    for i in range(K):
        for j in range(i + 1, K):
            if clusterDict[i] and clusterDict[j]:
                pairDistance = distance.euclidean(centroidList[i], centroidList[j])
                sumOfPairDistances += pairDistance
    print sumOfPairDistances
    # res = meanSumOfPointDistances / sumOfPairDistances
    res = float((l - 1.0) / 2.0) * meanSumOfPointDistances / sumOfPairDistances
    return res
def chooser(X, point):
    smallest = [euclidean(point, (X[0][0], X[0][1])), 0]
    distance = 0
    for i in range(1, len(X)):
        distance = euclidean(point, (X[i][0], X[i][1]))
        if distance < smallest[0]:
            smallest = [distance, i]
    return smallest[1]
def _w(self, i, y):
    ans = self.weights[i] / distance.euclidean(y, self.points[i])
    coef = 0.
    for j in range(len(self.points)):
        if not np.array_equal(y, self.points[j]):
            coef = coef + self.weights[j] / distance.euclidean(y, self.points[j])
    return ans / coef
def compute_distance(query_channel, channel, mean_vec, distance_type='eucos'):
    """ Compute the specified distance type between channels of the mean vector and the query image.

    In the caffe library, the FC8 layer consists of 10 channels. Here, we compute the distance
    of each channel (from the query image) with the respective channel of the Mean Activation Vector.
    In the paper, we considered a hybrid distance, eucos, which combines euclidean and cosine
    distance for bounding open space. Alternatively, other distances such as euclidean or cosine
    can also be used.

    Input:
    --------
    query_channel: particular FC8 channel of the query image
    channel: channel number under consideration
    mean_vec: mean activation vector

    Output:
    --------
    query_distance : distance between the respective channels
    """
    if distance_type == 'eucos':
        query_distance = spd.euclidean(mean_vec[channel, :], query_channel) / 200. + \
            spd.cosine(mean_vec[channel, :], query_channel)
    elif distance_type == 'euclidean':
        query_distance = spd.euclidean(mean_vec[channel, :], query_channel) / 200.
    elif distance_type == 'cosine':
        query_distance = spd.cosine(mean_vec[channel, :], query_channel)
    else:
        print "distance type not known: enter either of eucos, euclidean or cosine"
    return query_distance
def dist_filter(array, distance):
    """
    Iterate through array and remove spurious tracks
    :param array: A numpy array of positions (same format as the plotting function)
    :param distance: Movement greater than this distance is removed
    :return: The filtered array
    """
    navg = 5
    nfilt = 0
    for dim in range(1, array.shape[1], 2):
        for npoint in range(navg, array.shape[0] - navg):
            point = array[npoint, dim:(dim + 2)]
            if np.isnan(point[0]):
                continue
            else:
                # Compute mean positions for last and next num_avg frames
                last_set = array[(npoint - navg):npoint, dim:(dim + 2)]
                last_set = last_set[np.invert(np.isnan(last_set[:, 0])), :]  # wow, numpy is awkward to use
                last_mean = last_set.mean(axis=0)
                next_set = array[(npoint + 1):(npoint + 6), dim:(dim + 2)]
                next_set = next_set[np.invert(np.isnan(next_set[:, 0])), :]  # wow, numpy is awkward to use
                next_mean = next_set.mean(axis=0)
                # If the tracks move more than the threshold, erase it
                if (not np.isnan(last_mean[0]) and dist.euclidean(point, last_mean) > distance) or \
                        (not np.isnan(next_mean[0]) and dist.euclidean(point, next_mean) > distance):
                    array[npoint, dim:(dim + 2)] = np.nan
                    nfilt += 1
    print(nfilt, ' false tracks removed from the dataset.')
    return array
def flatten_marvelous_shells(shells):
    """
    Usage:
        flatten_marvelous_shells(mc.ls(sl=True))
    """
    for obj in shells:
        start_vert, end_vert = mc.ls(
            mc.polyListComponentConversion(cmds.ls(obj + ".e[0]")[0], fe=True, tv=True),
            fl=True
        )
        start_scale = distance.euclidean(
            mc.xform(start_vert, q=True, t=True, ws=True),
            mc.xform(end_vert, q=True, t=True, ws=True)
        )
        for uv in cmds.ls(obj + ".map[:]", fl=True):
            uv_pos = mc.polyEditUV(uv, q=True)
            uv_index = re.findall("\[([0-9]+)\]", uv)[0]
            vertex = mc.polyListComponentConversion(uv, fuv=True, tv=True)[0]
            mc.xform(vertex, t=[uv_pos[0]] + [0] + [uv_pos[1]], ws=True)
        # Finally, scale it
        end_scale = distance.euclidean(
            mc.xform(start_vert, q=True, t=True, ws=True),
            mc.xform(end_vert, q=True, t=True, ws=True)
        )
        scale_by = start_scale / end_scale
        mc.xform(mc.ls(obj + ".vtx[:]"), s=[scale_by, scale_by, scale_by], ws=True)
def cosine_distance(self, v1, v2):
    u = dot_product(v1, v2)
    n1 = np.array(v1.value())
    n2 = np.array(v2.value())
    # euclidean(x + x, x) equals the L2 norm of x
    d = distance.euclidean(n1 + n1, n1) * distance.euclidean(n2 + n2, n2)
    ret = u.value() / float(d)
    return ret
def find_path(self, x0, y0, z0, x, y, z, space=0, timeout=10, digging=False, debug=None): def iter_moveable_adjacent(pos): return self.iter_adjacent(*pos) def iter_diggable_adjacent(pos): return self.iter_adjacent(*pos, degrees=1.5) def is_diggable(current, neighbor): x0, y0, z0 = current x, y, z = neighbor return self.is_diggable(x0, y0, z0, x, y, z) def is_moveable(current, neighbor): x0, y0, z0 = current x, y, z = neighbor return self.is_moveable(x0, y0, z0, x, y, z) def block_breaking_cost(p1, p2, weight=7): x0, y0, z0 = p1 x, y, z = p2 return 1 + len(self.get_blocks_to_break(x0, y0, z0, x, y, z)) * 0.5 # TODO pre-check the destination for a spot to stand log.info('looking for path from: %s to %s', str((x0, y0, z0)), str((x, y, z))) if digging: if not ( self.is_safe_to_break(x, y, z) and self.is_safe_to_break(x, y + 1, z) ): return None neighbor_function = iter_diggable_adjacent #cost_function = block_breaking_cost cost_function = euclidean validation_function = is_diggable else: if space == 0 and not self.is_standable(x, y, z): return None neighbor_function = iter_moveable_adjacent cost_function = euclidean validation_function = is_moveable start = time.time() path = astar.astar( (floor(x0), floor(y0), floor(z0)), # start_pos neighbor_function, # neighbors validation_function, # validation lambda p: euclidean(p, (x, y, z)) <= space, # at_goal 0, # start_g cost_function, # cost lambda p: euclidean(p, (x, y, z)), # heuristic timeout, # timeout debug, # debug digging # digging ) if path is not None: log.info('Path found in %d sec. %d long.', int(time.time() - start), len(path)) return path
def score_for_clonal_single_copy(mu, i):
    """
    Scoring function. The score assigned to a point is the sum of the distance from the point
    to the line of single copy states and the distance from the point to the y-intercept of
    the line of single copy states.
    """
    if i not in singleCopyParamInds and i not in zeroCopyParamInds:
        return float('inf')
    RDR = mu[0]
    BAF = mu[1]
    # y-intercept of the point to be scored
    b1 = BAF - (m1 * RDR)
    # x coordinate of the point on the line of single copy states closest to the point being scored
    contactx = (b1 - b0) / (m0 - m1)
    # y coordinate of the point on the line of single copy states closest to the point being scored
    contacty = (m0 * contactx) + b0
    # distance from the point being scored to the line of single copy states
    distToContact = euclidean([RDR, BAF], [contactx, contacty])
    # distance from the point being scored to the y-intercept of the line of single copy states
    distToIntercept = euclidean([RDR, BAF], [0.0, b0])
    score = distToContact + distToIntercept
    return score
    contour for contour in contours_from_filtered_image
    if cv2.contourArea(contour) < 100
]

# Draw real contours
cv2.drawContours(original_image, largest_contours_from_filtered_image, -1, (0, 255, 0), 3)

# Reference from red square (3x3 cm)
reference_object = largest_contours_from_filtered_image[0]
# https://docs.opencv.org/trunk/dd/d49/tutorial_py_contour_features.html
reference_rect = cv2.minAreaRect(reference_object)
box = cv2.boxPoints(reference_rect)
box = np.array(box, dtype="int")
(tl, tr, br, bl) = perspective.order_points(box)
dist_in_pixel = euclidean(tl, tr)
dist_in_cm = 10
pixel_per_cm = dist_in_pixel / dist_in_cm

for contour in largest_contours_from_filtered_image:
    object = cv2.minAreaRect(contour)
    object_box_points = cv2.boxPoints(object)
    box = np.array(object_box_points, dtype="int")
    (top_left, top_right, bottom_right, bottom_left) = perspective.order_points(box)
    cv2.drawContours(original_image, [box.astype("int")], -1, (0, 0, 255), 2)
    middle_point_horizontal = (top_left[0] + int(abs(top_right[0] - top_left[0]) / 2),
                               top_left[1] + int(abs(top_right[1] - top_left[1]) / 2))
    middle_point_vertical = (top_right[0] +
def getCloudHull(xyc,width=64,height=64,perimeterSubdivisionSteps=4,smoothing=0.001, autoPerimeterOffset=True, perimeterOffset=None, autoPerimeterDensity=True): tree = KDTree(xyc, leafsize=10) hull = ConvexHull(xyc) hullPoints = [] hullIndices = {} for i in range(len(hull.vertices)): hullIndices[hull.vertices[i]] = True hullPoints.append(xyc[hull.vertices[i]]) for j in range(perimeterSubdivisionSteps): io = 0 for i in range(len(hullPoints)): index = tree.query(lerp(hullPoints[i+io],hullPoints[(i+1+io)%len(hullPoints)],0.5))[1] if not (index in hullIndices): hullPoints.insert( i+io+1, xyc[index]) hullIndices[index] = True io += 1 perimeterLength = 0 for i in range(len(hullPoints)): perimeterLength += distance.euclidean(hullPoints[i],hullPoints[(i+1)%len(hullPoints)]) perimeterCount = 2 * (width + height) - 4 perimeterStep = perimeterLength / perimeterCount perimeterPoints = [] perimeterDensity = np.zeros(perimeterCount) for i in range(perimeterCount): t = 1.0 * i / perimeterCount poh = getPointOnHull(hullPoints,t,perimeterLength) perimeterPoints.append(poh) perimeterDensity[i] = np.mean(tree.query(poh,k=32)[0]) if autoPerimeterOffset: bestDensity = perimeterDensity[0] + perimeterDensity[width-1] + perimeterDensity[width+height-2] + perimeterDensity[2*width+height-3] perimeterOffset = 0 for i in range(1,width+height ): density = perimeterDensity[i] + perimeterDensity[(i+width-1)%perimeterCount] + perimeterDensity[(i+width+height-2)%perimeterCount] + perimeterDensity[(i+2*width+height-3)%perimeterCount] if density < bestDensity: bestDensity = density perimeterOffset = i elif perimeterOffset is None: perimeterOffset = 0 corner = [np.min(xyc[:,0]),np.min(xyc[:,1])] d = corner-perimeterPoints[0] clostestDistanceToCorner = np.hypot(d[0],d[1]) for i in range(1,perimeterCount): d = corner-perimeterPoints[i] distanceToCorner = np.hypot(d[0],d[1]) if ( distanceToCorner < clostestDistanceToCorner): clostestDistanceToCorner = distanceToCorner perimeterOffset = i perimeterPoints = np.array(perimeterPoints) if perimeterOffset > 0: perimeterPoints[:,0] = np.roll(perimeterPoints[:,0], - perimeterOffset) perimeterPoints[:,1] = np.roll(perimeterPoints[:,1], - perimeterOffset) perimeterPoints = np.append(perimeterPoints,[perimeterPoints[0]],axis=0) bounds = {'top':perimeterPoints[0:width], 'right':perimeterPoints[width-1:width+height-1], 'bottom':perimeterPoints[width+height-2:2*width+height-2], 'left':perimeterPoints[2*width+height-3:]} bounds['s_top'],u = interpolate.splprep([bounds['top'][:,0], bounds['top'][:,1]],s=smoothing) bounds['s_right'],u = interpolate.splprep([bounds['right'][:,0],bounds['right'][:,1]],s=smoothing) bounds['s_bottom'],u = interpolate.splprep([bounds['bottom'][:,0],bounds['bottom'][:,1]],s=smoothing) bounds['s_left'],u = interpolate.splprep([bounds['left'][:,0],bounds['left'][:,1]],s=smoothing) densities = None if autoPerimeterDensity: densities = {} density_top = np.zeros(len(bounds['top'])) for i in range(len(density_top)): t = 1.0 * i / len(density_top) density_top[i] = np.mean(tree.query( np.array(interpolate.splev( t,bounds['s_top'])).flatten(),k=64)[0]) density_top /= np.sum(density_top) density_right = np.zeros(len(bounds['right'])) for i in range(len(density_right)): t = 1.0 * i / len(density_right) density_right[i] = np.mean(tree.query( np.array(interpolate.splev( t,bounds['s_right'])).flatten(),k=64)[0]) density_right /= np.sum(density_right) density_bottom = np.zeros(len(bounds['bottom'])) for i in range(len(density_bottom)): t = 1.0 * i / len(density_bottom) 
density_bottom[i] = np.mean(tree.query( np.array(interpolate.splev( t,bounds['s_bottom'])).flatten(),k=64)[0]) density_bottom /= np.sum(density_bottom) density_left = np.zeros(len(bounds['left'])) for i in range(len(density_left)): t = 1.0 * i / len(density_left) density_left[i] = np.mean(tree.query( np.array(interpolate.splev( t,bounds['s_left'])).flatten(),k=64)[0]) density_left /= np.sum(density_left) densities = {'top':density_top,'right':density_right,'bottom':density_bottom,'left':density_left} return bounds, densities
def distEclud2(x, y): return distance.euclidean(x, y)
def euc(a, b): return distance.euclidean(a, b)
def cal_length(life_points, int_points, aff_points, finger_points):
    life_length = euclidean(life_points[0], life_points[1]) + euclidean(
        life_points[1], life_points[2]) + euclidean(
        life_points[2], life_points[3]) + euclidean(
        life_points[3], life_points[4])
    int_length = euclidean(int_points[0], int_points[1]) + euclidean(
        int_points[1], int_points[2]) + euclidean(
        int_points[2], int_points[3]) + euclidean(
        int_points[3], int_points[4])
    aff_length = euclidean(aff_points[0], aff_points[1]) + euclidean(
        aff_points[1], aff_points[2]) + euclidean(
        aff_points[2], aff_points[3]) + euclidean(
        aff_points[3], aff_points[4])
    hand_length = euclidean(finger_points[3], finger_points[4])
    finger_width = (euclidean(finger_points[0], finger_points[1]) +
                    euclidean(finger_points[1], finger_points[2])) / 2
    return life_length, int_length, aff_length, hand_length, finger_width
    (770.0, 610.0), (795.0, 645.0), (720.0, 635.0), (760.0, 650.0), (475.0, 960.0),
    (95.0, 260.0), (875.0, 920.0), (700.0, 500.0), (555.0, 815.0), (830.0, 485.0),
    (1170.0, 65.0), (830.0, 610.0), (605.0, 625.0), (595.0, 360.0), (1340.0, 725.0),
    (1740.0, 245.0)
]

# no return to origin city
cities = [1, 49, 32, 45, 19, 41, 8, 9, 10, 43, 33, 51, 11, 52, 14, 13, 47, 26, 27, 28,
          12, 25, 4, 6, 15, 5, 24, 48, 38, 37, 40, 39, 36, 35, 34, 44, 46, 16, 29, 50,
          20, 23, 30, 2, 7, 42, 21, 17, 3, 18, 31, 22]

from scipy.spatial import distance

L = 0
for i in range(1, len(cities)):
    cid1 = coordinates[cities[i - 1] - 1]
    cid2 = coordinates[cities[i] - 1]
    L = L + distance.euclidean(cid1, cid2)
print(L)
for it in range(iterations):
    pList = []
    df = pd.read_csv(path1 + '/' + movement + '%01d' % it + '.csv')
    for i in range(nag):
        particle = Particle('Particle%02d' % i,
                            [df['x'][i], df['y'][i]],
                            [df['dpx'][i], df['dpy'][i]])
        pList.append(particle)
    for particle in pList:
        for neighbour in pList:
            if particle.name == neighbour.name:
                continue
            particle.neighbour_distances.append(
                distance.euclidean(particle.position, neighbour.position))
            particle.n_dphis.append(neighbour.dphi)
        for radius in radii:
            counter = 0
            local_c = 0
            for dist in particle.neighbour_distances:
                if radius <= dist <= (radius + 0.01):
                    index = particle.neighbour_distances.index(dist)
                    counter += 1
                    local_c += np.dot(particle.dphi, particle.n_dphis[index])
            if counter > 0:
                particle.correlation.append(local_c / counter)
            else:
                particle.correlation.append(np.nan)
d = 1000
d_max = 0
d_average = 0
d_best = []
d_maxp = []
image = cv2.imread(photopath)
examples = []
for j in prediicttxt:
    pred = [int(j[0]), int(j[1]), int(j[2]), int(j[3])]
    gt = [0, 0, 0, 0]
    dist = 10000
    for i in groundtxt:
        tmp_gt = [float(i[1]), float(i[2]), float(i[3]), float(i[4])]
        dist_tmp = distance.euclidean(
            (int(wp * tmp_gt[0]), int(hp * tmp_gt[1])),
            (int(pred[0] + ((pred[2] - pred[0]) / 2)),
             int(pred[1] + ((pred[3] - pred[1]) / 2))))
        if dist > dist_tmp:
            gt = tmp_gt
            dist = dist_tmp
    xgt = int((wp * gt[0]) - ((wp * gt[2]) / 2))
    ygt = int(hp * gt[1] - ((hp * gt[3]) / 2))
    wgt = int(((wp * gt[0]) - ((wp * gt[2]) / 2)) + wp * gt[2])
    hgt = int((hp * gt[1] - ((hp * gt[3]) / 2)) + hp * gt[3])
    xpr = int((wp * pred[0]) - ((wp * pred[2]) / 2))
    ypr = int(hp * pred[1] - ((hp * pred[3]) / 2))
    wpr = int(((wp * pred[0]) - ((wp * pred[2]) / 2)) + wp * pred[2])
    hpr = int((hp * pred[1] - ((hp * pred[3]) / 2)) + hp * pred[3])
def getInterAtomicDistances(cycle):
    cr = getPos(cycle[0])
    res = map(lambda x: {'atom': str(x['num']) + '.' + x['atom'],
                         'dist': distance.euclidean(getPos(x), cr)},
              cycle[1:])
    return list(res)
    S_w = np.array(S_w)
    vector_sum = S_w.sum(axis=0)
    return (vector_sum / np.sqrt((vector_sum ** 2).sum())).mean()


# additional features
data['sen2vec_q1'] = data.question1.apply(lambda x: sen2vect(x))
data['sen2vec_q2'] = data.question2.apply(lambda x: sen2vect(x))

# word vector distance
from scipy.spatial.distance import cosine, cityblock, euclidean, jaccard

data['cosine'] = cosine(np.nan_to_num(data['sen2vec_q1']), np.nan_to_num(data['sen2vec_q2']))
data['cityblock'] = cityblock(np.nan_to_num(data['sen2vec_q1']), np.nan_to_num(data['sen2vec_q2']))
data['euclidean'] = euclidean(np.nan_to_num(data['sen2vec_q1']), np.nan_to_num(data['sen2vec_q2']))
data['jaccard'] = jaccard(np.nan_to_num(data['sen2vec_q1']), np.nan_to_num(data['sen2vec_q2']))
data.head(3)

data = data.drop([
    'Unnamed: 0', 'id', 'qid1', 'qid2', 'question1', 'question2', 'fuzz_qratio'
], axis=1)

Y = data['is_duplicate']
x = data.drop(['is_duplicate'], axis=1)
x.isnull().sum()
x = x.fillna(0)

from sklearn.preprocessing import StandardScaler
scaler = StandardScaler().fit(x)
def physical_distance(location1, location2): return euclidean(tuple(location1[2]), tuple(location2[2]))
def motion_tracking(url, model, classes, video_name, file_name, skip_frame, min_dist_thresh, removing_thresh, confi_thresh, start_sec=0, end_sec=None): ''' --input-- url: url of the video, model: SSD model, classes: dict(class:index), video_name: file name of the processed video, file_name: file name of the entire motion tracking history, skip_frame: number of frames to skip tracking, min_dist_thresh: minimum euclidian distance to differentiate two different objects, removing_thresh: number of the identicle position history to determine the object left the scene, confi_thresh: confidence threshold for SSD object detection [0,1], start_sec: start of the video to be processed (seconds), end_sec: end of the video to be processed (seconds) --output-- sorted_archive: dictionary of the full motion history sorted by the created timestamp background: background image with moving objects filtered out ''' initial_time = dt.datetime.fromtimestamp(tm.time()).replace(microsecond=0) background, vid_height, vid_width, FPS, title, length = video_background( url, alpha=0.005) height_fix_factor = vid_height / 512 width_fix_factor = vid_width / 512 label_to_track = 'person' frame_count = 0 print('processing starts at {:.2f} sec'.format(start_sec)) pa = pafy.new(url) play = pa.getbest(preftype='webm') cap = cv2.VideoCapture(play.url) if (cap.isOpened() == False): print('cannot read a video') track_history = {} moving_tracker = {} archive = {} customer_idx = 1 # customer id fourcc = cv2.VideoWriter_fourcc(*'XVID') out = cv2.VideoWriter(os.path.join('static', video_name + '.avi'), fourcc, FPS, (vid_width, vid_height)) if end_sec is None: end_sec = pa.length while cap.isOpened() and frame_count < np.floor(end_sec * FPS): ret, frame = cap.read() # frame = frame.astype('uint8') #### NEW LINE if frame_count < np.floor(start_sec * FPS): frame_count += 1 continue if ret == True: frame = np.asarray(frame) orig_frame = np.copy(frame) if frame_count % skip_frame == 0: frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB) frame = cv2.resize(frame, (512, 512)) frame = np.expand_dims(frame, 0) current_centroids = [ ] # reset current centroid list at every processing frame # for the very first frame if not track_history: result = model.predict(frame) results = result[0][result[0, :, 1] >= confi_thresh] results = filter_by_classes(results, classes, labels=[label_to_track]) for r in results: r[2] = xmin = int(r[2] * width_fix_factor) r[3] = ymin = int(r[3] * height_fix_factor) r[4] = xmax = int(r[4] * width_fix_factor) r[5] = ymax = int(r[5] * height_fix_factor) centroid = (int(np.mean( (xmin, xmax))), int(np.mean((ymin, ymax)))) # current_centroids.append(centroid) # add the list of positions track_history[customer_idx] = [ initial_time + dt.timedelta(seconds=frame_count / FPS), [centroid] ] # initialize moving racker moving_tracker[customer_idx] = 0 customer_idx += 1 else: result = model.predict(frame) results = result[0][result[0, :, 1] >= confi_thresh] results = filter_by_classes(results, classes, labels=[label_to_track]) for r in results: r[2] = xmin = int(r[2] * width_fix_factor) r[3] = ymin = int(r[3] * height_fix_factor) r[4] = xmax = int(r[4] * width_fix_factor) r[5] = ymax = int(r[5] * height_fix_factor) centroid = (int(np.mean( (xmin, xmax))), int(np.mean((ymin, ymax)))) current_centroids.append(centroid) # print('for frame {}: {}'.format(frame_count,current_centroids))####################################### track_history_temp = copy.deepcopy(track_history) track_history_key_temp = copy.deepcopy( 
list(track_history.keys())) # comparison for cent in current_centroids: min_dist = min_dist_thresh min_label = None for label in track_history_key_temp: dist = distance.euclidean( cent, track_history_temp[label][-1][-1]) if dist < min_dist: min_dist = dist min_label = label # for same label centroid if min_label is not None: if min_dist == 0: # if object not moved, increase the moving tracker counter by 1 moving_tracker[min_label] += 1 else: # if moved, reset the tracker counter moving_tracker[min_label] = 0 track_history_temp[min_label][-1].append(cent) track_history_key_temp.remove(min_label) # min_label is NONE --> NEW object in the scene else: track_history_temp[customer_idx] = [ initial_time + dt.timedelta(seconds=frame_count / FPS), [cent] ] moving_tracker[customer_idx] = 0 customer_idx += 1 # object hidden or exit if track_history_key_temp: for left_over in track_history_key_temp: moving_tracker[left_over] += 1 track_history_temp[left_over][-1].append( track_history_temp[left_over][-1][-1]) track_history = track_history_temp # update the history # print('for frame {} dict: {}'.format(frame_count,track_history))####################################### # print('moving tracker: {}'.format(moving_tracker)) # print('\n') # generate orig_frame based on track_history for idx, loc in track_history.items(): cv2.circle(orig_frame, loc[-1][-1], 10, color_by_index(idx), cv2.FILLED) # move the unmoving objects to the archive dictionary moving_tracker_temp = copy.deepcopy(moving_tracker) for obj, counter in moving_tracker_temp.items(): if counter == removing_thresh: archive[obj] = [ track_history[obj][0], track_history[obj][-1][:-removing_thresh] ] del track_history[obj] del moving_tracker[obj] print('>', end='') # in-between frames else: for idx, loc in track_history.items(): cv2.circle(orig_frame, loc[-1][-1], 10, color_by_index(idx), cv2.FILLED) out.write(orig_frame) frame_count += 1 else: break print('\n') print('proccesing finished at {:.2f} sec'.format(frame_count / FPS)) print('Total time processed: {:.2f} sec'.format(frame_count / FPS - start_sec)) cap.release() out.release() # after all, move all to archive moving_tracker_temp = copy.deepcopy(moving_tracker) for obj, counter in moving_tracker_temp.items(): archive[obj] = [track_history[obj][0], track_history[obj][-1]] del track_history[obj] del moving_tracker[obj] sorted_archive = sorted(archive.items(), key=lambda kv: kv[1][0], reverse=False) sorted_archive = collections.OrderedDict(sorted_archive) pickle.dump(sorted_archive, open(file_name + '.pkl', 'wb')) pickle.dump(background, open('background' + '.pkl', 'wb')) print('processed video saved as: "{}.avi"'.format(video_name)) print('file saved as: "{}.txt"'.format(file_name)) return sorted_archive, background, title, initial_time, length
centroids = attributes[selected_centroids, :]
no_of_iterations = int(input("Enter the max number of iterations: "))
for i in range(no_of_iterations):
    clusters = defaultdict(list)
    cluster_temp = []
    for j in range(rows):
        current_ans = float('inf')
        centroid_choice = None
        centroid_dist = []
        for l in range(len(centroids)):
            current_distance = distance.euclidean(attributes[j], centroids[l])
            if current_distance < current_ans:
                current_ans = current_distance
                centroid_choice = l
        clusters[centroid_choice].append(j)
    centroids_new = []
    for l in range(len(centroids)):
        relevant_attributes = attributes[clusters[l], :]
        if len(relevant_attributes) == 0:
            centroids_new.append(centroids[l])
        else:
            centroids_new.append(np.mean(relevant_attributes, axis=0))
    if np.array_equal(centroids, centroids_new):
        break
    centroids = centroids_new
def get_distance(M, p1, p2):
    point_one = M.intersections[p1]
    point_two = M.intersections[p2]
    return distance.euclidean(point_one, point_two)
def _distance(point_1, point_2):
    dist = distance.euclidean(point_1.coordinates, point_2.coordinates)
    return dist
def angle(robot):
    topIDs = []  # i.e. the two circles on the flat end of the robot
    bottomIDs = []
    theta1 = 999  # An impossible number for if statements later
    theta2 = 999
    # Uses the cosine law to figure out the angle every possible combo of ID circles makes
    # with the center of the robot (team ID), assigning to top or bottom IDs based on this angle
    for ii in range(len(robot.circles) - 1):
        for jj in range(ii + 1, len(robot.circles)):
            temp1 = robot.circles[ii]
            temp2 = robot.circles[jj]
            # Determining distance between the different IDs
            a = dist.euclidean([temp1[0], temp1[1]], robot.pos)  # Distance from ID 1 to centre
            b = dist.euclidean([temp2[0], temp2[1]], robot.pos)  # Distance from ID 2 to centre
            c = dist.euclidean([temp1[0], temp1[1]],
                               [temp2[0], temp2[1]])  # Distance from ID 1 to ID 2
            try:
                theta = math.degrees(
                    math.acos((c ** 2 - b ** 2 - a ** 2) / (-2.0 * a * b)))  # CRASHES ON RARE OCCASIONS
            except:
                print('Theta Error')
                continue
            if theta > 100 and theta < 130:  # Ideally 114.84 degrees
                topIDs.append(temp1)
                topIDs.append(temp2)
            if theta > 45 and theta < 75:  # Ideally 65.16 degrees
                bottomIDs.append(temp1)
                bottomIDs.append(temp2)
            # the other ID pairs will be either ~180 or ~90 degrees
    # Takes the top two IDs and their average position, creating a vector to that point from the
    # center of the robot which the robot's angle can be derived from
    if len(topIDs) == 2:
        xMean = (topIDs[0][0] + topIDs[1][0]) / 2
        yMean = (topIDs[0][1] + topIDs[1][1]) / 2
        xDiff = xMean - robot.pos[0]
        yDiff = yMean - robot.pos[1]
        # Angle points in the direction the robot is facing
        theta1 = math.degrees(math.atan2(yDiff, xDiff))
    #else:
    #    print("top went wrong...")
    # Takes the bottom two IDs and their average position, creating a vector from that point to
    # the center of the robot which the robot's angle can be derived from
    # (this is the opposite direction from the other one so the angle will be the same)
    if len(bottomIDs) == 2:
        xMean2 = (bottomIDs[0][0] + bottomIDs[1][0]) / 2
        yMean2 = (bottomIDs[0][1] + bottomIDs[1][1]) / 2
        xDiff2 = robot.pos[0] - xMean2
        yDiff2 = robot.pos[1] - yMean2
        # Negative for both of these to get an angle that is front facing
        theta2 = math.degrees(math.atan2(yDiff2, xDiff2))
    #else:
    #    print("bottom is wrong")
    # Averages the vectors to get a better approx of the true angle
    if theta2 != 999 and theta1 != 999:
        xMean = (math.cos(math.radians(theta1)) + math.cos(math.radians(theta2))) / 2
        yMean = (math.sin(math.radians(theta1)) + math.sin(math.radians(theta2))) / 2
        theta = math.degrees(math.atan2(yMean, xMean))
        robot.angle = theta
    # If one of the vector calcs failed, just take the one that worked
    elif theta2 != 999 and theta1 == 999:
        theta = theta2
        robot.angle = theta
    elif theta2 == 999 and theta1 != 999:
        theta = theta1
        robot.angle = theta
    else:
        return "ERROR"
    reassignIDs(robot, topIDs, bottomIDs)
                                  reverse=True):
    if not itemID in watched:
        movieID = trainSet.to_raw_iid(itemID)
        if (ratingSum > 5):
            ratingSum = 5
        print(ml.getMovieName(int(movieID)), int(ratingSum))
        pos += 1
        if (pos > 9):
            break

print("\nNew Recommendations:")
# Get top-rated items from similar users:
pos = 0
for itemID, ratingSum in sorted(candidates.items(), key=itemgetter(1), reverse=True):
    if not itemID in watched:
        movieID = trainSet.to_raw_iid(itemID)
        dst = distance.euclidean(usergenre, genres[int(movieID)])
        if (0 < dst < 2.2):
            ratingSum += ratingSum * 1 / int(dst)
            if (ratingSum > 5):
                ratingSum = 5
            print(ml.getMovieName(int(movieID)), int(ratingSum))
        else:
            pos -= 1
        pos += 1
        if (pos > 9):
            break
def distance(self, other): return euclidean(self.position, other.position)
lineShow = []  # holds the coordinates for the line segment between two fingers
pixelsPerMetric = None  # pixel ratio for conversion of pixels to mm (unit pixels/mm)
i = 0  # contour count
for box in contoursArr:
    i = i + 1
    # draws the contour in the original image (just for display purposes)
    cv2.drawContours(img, [box.astype("int")], -1, (0, 255, 0), 2)
    (midTop, midBot, center, midLeft, midRight, lowest, highest) = findMidTopBotCenter(box)
    # in the first contour (standard box), the pixel ratio is calculated as
    # (actual distance in pixels / actual distance in mm)
    if pixelsPerMetric is None:
        dA = dist.euclidean(midLeft, midRight)
        pixelsPerMetric = dA / actualWidth
    if i == 1:
        continue
    if i == 3:
        lineShow.append(lowest)   # Append the midBottom point of box if red contour
    if i == 2:
        lineShow.append(highest)  # Append the midTop point of box if blue contour
    midPoints.append(center)  # append the midpoints of contour
lineShow = lineCheck(
    lineShow
def testcreate_feature_list(self):
    atom_features, ring_features = self.arpeggio_2vta.create_feature_list("INTER")
    fs = atom_features + ring_features
    pairs = [[[f.point.x, f.point.y, f.point.z],
              [f.projected.x, f.projected.y, f.projected.z]] for f in fs]
    d = [distance.euclidean(point, proj) for point, proj in pairs]
    self.assertTrue(all(np.less(d, 8)))
def cluster_labels(word_vector, iterations, cluster_number):
    # the initial centers are chosen at random
    centers = random.sample(word_vector, cluster_number)
    for i in range(0, iterations):  # the algorithm is repeated for n iterations
        label_vector = []  # a list to store the cluster for each item
        for vector in word_vector:
            distances = []
            for center in centers:  # compute the distance from the element to every center
                dist = distance.euclidean(center, vector)
                distances.append(dist)
            label_vector.append(np.argmin(distances))  # the label is the index of the minimum distance
        # bonus question: displaying the iteration process
        print(f'cluster labels for iteration {i} for the first 100 items are {label_vector[0:100]}')
        for k in range(cluster_number):  # update the centers
            cluster_sum = np.zeros(len(word_vector[0]))
            cluster_count = 0
            for j in range(len(word_vector)):
                if k == label_vector[j]:
                    cluster_sum = cluster_sum + word_vector[j]
                    cluster_count += 1
            if cluster_count != 0:  # update the center value, otherwise the center stays the same
                centers[k] = (cluster_sum / cluster_count)
    return label_vector
def eye_aspect_ratio(self, eye):
    A = euclidean(eye[1], eye[5])
    B = euclidean(eye[2], eye[4])
    C = euclidean(eye[0], eye[3])
    return (A + B) / (2.0 * C)
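# A hedged usage sketch for eye_aspect_ratio() above (not part of the original snippet):
# with dlib's 68-point facial landmarks the right eye occupies indices 36-41 and the
# left eye 42-47; `detector` (an instance of the class defining the method), `shape`
# (a (68, 2) landmark array), and the 0.2 blink threshold are illustrative assumptions.
RIGHT_EYE = slice(36, 42)
LEFT_EYE = slice(42, 48)
EAR_THRESHOLD = 0.2

ear = (detector.eye_aspect_ratio(shape[RIGHT_EYE]) +
       detector.eye_aspect_ratio(shape[LEFT_EYE])) / 2.0
is_blinking = ear < EAR_THRESHOLD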
def geometric_slerp(start, end, t, tol=1e-7): """ Geometric spherical linear interpolation. The interpolation occurs along a unit-radius great circle arc in arbitrary dimensional space. Parameters ---------- start : (n_dimensions, ) array-like Single n-dimensional input coordinate in a 1-D array-like object. `n` must be greater than 1. end : (n_dimensions, ) array-like Single n-dimensional input coordinate in a 1-D array-like object. `n` must be greater than 1. t: float or (n_points,) 1D array-like A float or 1D array-like of doubles representing interpolation parameters, with values required in the inclusive interval between 0 and 1. A common approach is to generate the array with ``np.linspace(0, 1, n_pts)`` for linearly spaced points. Ascending, descending, and scrambled orders are permitted. tol: float The absolute tolerance for determining if the start and end coordinates are antipodes. Returns ------- result : (t.size, D) An array of doubles containing the interpolated spherical path and including start and end when 0 and 1 t are used. The interpolated values should correspond to the same sort order provided in the t array. The result may be 1-dimensional if ``t`` is a float. Raises ------ ValueError If ``start`` and ``end`` are antipodes, not on the unit n-sphere, or for a variety of degenerate conditions. Notes ----- The implementation is based on the mathematical formula provided in [1]_, and the first known presentation of this algorithm, derived from study of 4-D geometry, is credited to Glenn Davis in a footnote of the original quaternion Slerp publication by Ken Shoemake [2]_. .. versionadded:: 1.5.0 References ---------- .. [1] https://en.wikipedia.org/wiki/Slerp#Geometric_Slerp .. [2] Ken Shoemake (1985) Animating rotation with quaternion curves. ACM SIGGRAPH Computer Graphics, 19(3): 245-254. See Also -------- scipy.spatial.transform.Slerp : 3-D Slerp that works with quaternions Examples -------- Interpolate four linearly-spaced values on the circumference of a circle spanning 90 degrees: >>> from scipy.spatial import geometric_slerp >>> import matplotlib.pyplot as plt >>> fig = plt.figure() >>> ax = fig.add_subplot(111) >>> start = np.array([1, 0]) >>> end = np.array([0, 1]) >>> t_vals = np.linspace(0, 1, 4) >>> result = geometric_slerp(start, ... end, ... t_vals) The interpolated results should be at 30 degree intervals recognizable on the unit circle: >>> ax.scatter(result[...,0], result[...,1], c='k') >>> circle = plt.Circle((0, 0), 1, color='grey') >>> ax.add_artist(circle) >>> ax.set_aspect('equal') >>> plt.show() Attempting to interpolate between antipodes on a circle is ambiguous because there are two possible paths, and on a sphere there are infinite possible paths on the geodesic surface. Nonetheless, one of the ambiguous paths is returned along with a warning: >>> opposite_pole = np.array([-1, 0]) >>> with np.testing.suppress_warnings() as sup: ... sup.filter(UserWarning) ... geometric_slerp(start, ... opposite_pole, ... 
t_vals) array([[ 1.00000000e+00, 0.00000000e+00], [ 5.00000000e-01, 8.66025404e-01], [-5.00000000e-01, 8.66025404e-01], [-1.00000000e+00, 1.22464680e-16]]) Extend the original example to a sphere and plot interpolation points in 3D: >>> from mpl_toolkits.mplot3d import proj3d >>> fig = plt.figure() >>> ax = fig.add_subplot(111, projection='3d') Plot the unit sphere for reference (optional): >>> u = np.linspace(0, 2 * np.pi, 100) >>> v = np.linspace(0, np.pi, 100) >>> x = np.outer(np.cos(u), np.sin(v)) >>> y = np.outer(np.sin(u), np.sin(v)) >>> z = np.outer(np.ones(np.size(u)), np.cos(v)) >>> ax.plot_surface(x, y, z, color='y', alpha=0.1) Interpolating over a larger number of points may provide the appearance of a smooth curve on the surface of the sphere, which is also useful for discretized integration calculations on a sphere surface: >>> start = np.array([1, 0, 0]) >>> end = np.array([0, 0, 1]) >>> t_vals = np.linspace(0, 1, 200) >>> result = geometric_slerp(start, ... end, ... t_vals) >>> ax.plot(result[...,0], ... result[...,1], ... result[...,2], ... c='k') >>> plt.show() """ start = np.asarray(start, dtype=np.float64) end = np.asarray(end, dtype=np.float64) t = np.asarray(t) if t.ndim > 1: raise ValueError("The interpolation parameter " "value must be one dimensional.") if start.ndim != 1 or end.ndim != 1: raise ValueError("Start and end coordinates " "must be one-dimensional") if start.size != end.size: raise ValueError("The dimensions of start and " "end must match (have same size)") if start.size < 2 or end.size < 2: raise ValueError("The start and end coordinates must " "both be in at least two-dimensional " "space") if np.array_equal(start, end): return np.linspace(start, start, t.size) # for points that violate equation for n-sphere for coord in [start, end]: if not np.allclose(np.linalg.norm(coord), 1.0, rtol=1e-9, atol=0): raise ValueError("start and end are not" " on a unit n-sphere") if not isinstance(tol, float): raise ValueError("tol must be a float") else: tol = np.fabs(tol) coord_dist = euclidean(start, end) # diameter of 2 within tolerance means antipodes, which is a problem # for all unit n-spheres (even the 0-sphere would have an ambiguous path) if np.allclose(coord_dist, 2.0, rtol=0, atol=tol): warnings.warn("start and end are antipodes" " using the specified tolerance;" " this may cause ambiguous slerp paths") t = np.asarray(t, dtype=np.float64) if t.size == 0: return np.empty((0, start.size)) if t.min() < 0 or t.max() > 1: raise ValueError("interpolation parameter must be in [0, 1]") if t.ndim == 0: return _geometric_slerp(start, end, np.atleast_1d(t)).ravel() else: return _geometric_slerp(start, end, t)
def euc(a, b):
    return distance.euclidean(a, b)  # find out how it measures the distance between features!
def eyes(): # Landmark model location PREDICTOR_PATH = "shape_predictor_68_face_landmarks.dat" # Get the face detector faceDetector = dlib.get_frontal_face_detector() # The landmark detector is implemented in the shape_predictor class landmarkDetector = dlib.shape_predictor(PREDICTOR_PATH) # Read image #imageFilename = "hillary_clinton.jpg" cam = cv2.VideoCapture(0) global temp global frequency global durationq fourcc = cv2.VideoWriter_fourcc(*'XVID') out = cv2.VideoWriter('output.avi',fourcc, 10.0, (640,480)) points=[] wideness=[] between=[] m=0 consecframes=0 consecframes3=0 consecframes4=0 while 1: ret, img = cam.read() #im= cv2.imread(imageFilename) # landmarks will be stored in results/family_i.txt landmarksBasename = "results/faces" # Detect faces in the image faceRects = faceDetector(img, 0) # List to store landmarks of all detected faces landmarksAll = [] # Loop over all detected face rectangles for i in range(0, len(faceRects)): newRect = dlib.rectangle(int(faceRects[i].left()),int(faceRects[i].top()),int(faceRects[i].right()),int(faceRects[i].bottom())) # For every face rectangle, run landmarkDetector landmarks = landmarkDetector(img, newRect) # Store landmarks for current face landmarksAll.append(landmarks) # Draw landmarks on face #renderFace(img, landmarks) landmarksFileName = landmarksBasename +"_"+ str(i)+ ".txt" # print("Saving landmarks to", landmarksFileName) # Writelandmarks to disk coords,coords2= writeLandmarksToFile(landmarks, landmarksFileName) #print(coords) #for i in coords: #cv2.circle(img,tuple(i), 1, (0,0,255), 1 ) left2right=dist.euclidean(coords[6],coords[0]) #0.412 lefteyewideness=dist.euclidean(coords[3],coords[0]) righteyewideness=dist.euclidean(coords[1],coords[5])#0.12 x1=coords[0][0] y1=min(coords[1][1],coords[2][1]) w1=coords[3][0] h1=max(coords[4][1],coords[5][1]) x2=coords[6][0] y2=min(coords[7][1],coords[8][1]) w2=coords[9][0] h2=max(coords[10][1],coords[11][1]) global left_coords global right_coords for i in range(6): right_coords.append([coords[i][0]-x1,coords[i][1]-y1]) left_coords.append([coords[i+6][0]-x2,coords[i+6][1]-y2]) #cv2.rectangle(img,(x1,y1),(w1,h1),(255,255,0),1) center_right=None center_left=None eye_crop_right=img[y1:h1,x1:w1] eye_crop_right_gray=cv2.cvtColor(eye_crop_right,cv2.COLOR_BGR2GRAY) #eye_crop_right_mask=mask(eye_crop_right,right_coords) center_right=iris_center(eye_crop_right_gray) #cv2.imshow("right",eye_crop_right_mask) #print(center_right) cv2.circle(eye_crop_right,center_right,5,(0,0,255),2) eye_crop_left=img[y2:h2,x2:w2] eye_crop_left_gray=cv2.cvtColor(eye_crop_left,cv2.COLOR_BGR2GRAY) #eye_crop_left_mask=mask(eye_crop_left,left_coords) center_left=iris_center(eye_crop_left_gray) #cv2.imshow("left",eye_crop_left_mask) ##print(center_left) cv2.circle(eye_crop_left,center_left,5,(0,0,255),2) #cv2.imshow("crop",eye_crop_left) EAR1=( (dist.euclidean(coords[1],coords[5])+dist.euclidean(coords[2],coords[4]))/left2right) EAR=( (dist.euclidean(coords[1],coords[5])+dist.euclidean(coords[2],coords[4]))/(2*dist.euclidean(coords[0],coords[3]))) if callibration == True: global average global temp3 global ratios global c global state_left global iris global temp3 global temp4 global temp5 global iris_ratios global number_of_iris_coords a= win32api.GetKeyState(0x01) if a != state_left: state_left = a if average==1: number_of_iris_coords.append(len(temp)) ratios[c]=mean(temp) c+=1 average=0 temp=[] if a < 0: if (EAR1): temp.append(EAR1) if(center_right): if (stop_con): iris.append(center_right) else: iris.append([-1,-1]) 
#print(iris) if iris_average==1: #print("number_of_iris_coords :",number_of_iris_coords) iris_ratios=iris_mean(iris,number_of_iris_coords) #print(ratios) if callibration==False: #print("ear after callib " ,EAR1) cal.callibration(ratios,EAR1,iris_ratios,center_right) if EAR1 > ratios[0]+.05 and speed!=0: consecframes4+=1 if consecframes4>=20 and speed>20: winsound.Beep(frequency3,duration) else: consecframes4=0 if EAR < ratios[2]and speed!=0: consecframes +=1 if consecframes >=3: # Set Duration To 1000 ms == 1 second winsound.Beep(frequency2, duration) else: consecframes=0 if consecframes2>=15 and speed >20: winsound.Beep(frequency,duration) if dist.euclidean(coords2[0],coords2[2])/dist.euclidean(coords2[1],coords2[3])>1.5 or dist.euclidean(coords2[1],coords2[3])/dist.euclidean(coords2[0],coords2[2]) >1.5: consecframes3+=1 if consecframes3>=20 and speed>20: winsound.Beep(frequency3,duration) else: consecframes3=0 out.write(img) cv2.imshow("Facial Landmark detector", img) # cv2.draw k= cv2.waitKey(10) & 0xff if k==27: break
def deviation_from_overall(vis: Vis, ldf: LuxDataFrame, filter_specs: list, msr_attribute: str) -> int:
    """
    Difference in bar chart/histogram shape from overall chart.
    Note: this function assumes that the filtered vis.data is operating on the same range as the unfiltered vis.data.

    Parameters
    ----------
    vis : Vis
    ldf : LuxDataFrame
    filter_specs : list
        List of filters from the Vis
    msr_attribute : str
        The attribute name of the measure value of the chart

    Returns
    -------
    int
        Score describing how different the vis is from the overall vis
    """
    v_filter_size = get_filtered_size(filter_specs, ldf)
    v_size = len(vis.data)
    v_filter = vis.data[msr_attribute]
    total = v_filter.sum()
    v_filter = v_filter / total  # normalize by total to get ratio
    if total == 0:
        return 0
    # Generate an "Overall" Vis (TODO: This is computed multiple times for every vis;
    # an alternative is to directly access df.current_vis, but we do not have a guarantee
    # that it will always be the unfiltered vis (in the non-Filter action scenario))
    import copy
    unfiltered_vis = copy.copy(vis)
    unfiltered_vis._inferred_intent = utils.get_attrs_specs(
        vis._inferred_intent)  # Remove filters, keep only attribute intent
    ldf.executor.execute([unfiltered_vis], ldf)
    v = unfiltered_vis.data[msr_attribute]
    v = v / v.sum()
    assert len(v) == len(v_filter), "Data for filtered and unfiltered vis have unequal length."
    sig = v_filter_size / v_size  # significance factor
    # Euclidean distance as L2 function
    rankSig = 1  # category measure value ranking significance factor
    # if the vis is a bar chart, count how many categories' rank, based on measure value,
    # changes after the filter is applied
    if vis.mark == "bar":
        dimList = vis.get_attr_by_data_model("dimension")
        # use the Pandas rank function to calculate rank positions for each category
        v_rank = unfiltered_vis.data.rank()
        v_filter_rank = vis.data.rank()
        # go through and count the number of ranking changes between the filtered and unfiltered data
        numCategories = ldf.cardinality[dimList[0].attribute]
        for r in range(0, numCategories - 1):
            if v_rank[msr_attribute][r] != v_filter_rank[msr_attribute][r]:
                rankSig += 1
        # normalize the ranking significance factor
        rankSig = rankSig / numCategories
    from scipy.spatial.distance import euclidean
    return sig * rankSig * euclidean(v, v_filter)
def limit_population(x, k, target):
    x.sort(key=lambda e: euclidean(e, target))
    return x[:k]
def normalize_skeleton(_skel, mode='coco'): """ Recoordinate the skeleton so that the position of the neck joint is (0,0) or (0,0,0). And normalize shoulder length to 1 Specify 'coco' or 'cmu' for mode coco : _skel.shape = ({frame_num}, 19, 2) or ({frame_num}, 19, 3) cmu : _skel.shape = ({frame_num}, 18, 2) or ({frame_num}, 18, 3) Keyword Arguments: _skel - skeleton pose, unnormalized """ if mode == 'coco': neck_joint_idx = 1 nose_joint_idx = 0 r_shoulder_joint_idx = 2 l_shoulder_joint_idx = 5 elif mode == 'cmu' or mode == 'openpose': neck_joint_idx = 0 nose_joint_idx = 1 r_shoulder_joint_idx = 9 l_shoulder_joint_idx = 3 else: raise AssertionError("Choose 'coco' or 'cmu' for the normalization argument") if mode == 'cmu': new_poses = [] angles = [] shoulder_lengths = [] neck_positions = [] for pose in _skel: shoulder_len = distance.euclidean(pose[neck_joint_idx], pose[l_shoulder_joint_idx]) shoulder_lengths.append(shoulder_len) neck_positions.append(pose[neck_joint_idx]) new_pose = [] for joint in pose: new_pose.append((joint - pose[neck_joint_idx]) / shoulder_len) new_poses.append(new_pose) new_poses = np.array(new_poses) return new_poses, np.array(shoulder_lengths), np.array(neck_positions) elif mode == 'openpose': # ----- Localization ----- localize_pose = [] neck_positions = [] for pose in _skel: tmp = [] for joint in pose: tmp.append(joint - pose[neck_joint_idx]) localize_pose.append(tmp) neck_positions.append(pose[neck_joint_idx]) localize_pose = np.array(localize_pose) neck_positions = np.array(neck_positions) # ----- Scaling ------ # Find the frame with the speaker facing most front face_front_value = 10000 for i in range(len(localize_pose)): # Search by x coordinate of the nose if face_front_value > abs(localize_pose[i][nose_joint_idx][0]): face_front_value = abs(localize_pose[i][nose_joint_idx][0]) face_front_frame = i scale_factor = distance.euclidean(localize_pose[face_front_frame][neck_joint_idx], localize_pose[face_front_frame][l_shoulder_joint_idx]) normalized_pose = [] for pose in localize_pose: tmp = [] for joint in pose: tmp.append(joint / scale_factor) normalized_pose.append(tmp) normalized_pose = np.array(normalized_pose) return normalized_pose, neck_positions, scale_factor else: print("[Error] supecify mode ('coco' or 'cmu' or 'openpose') when normalizing") sys.exit(-1)
def weighted_decision_ish(face_descriptor, descriptors): return min( [distance.euclidean(descr, face_descriptor) for descr in descriptors])
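# A hedged usage sketch for weighted_decision_ish() above (not part of the original snippet):
# the descriptors are assumed to be 128-d face embeddings (e.g. from dlib's face
# recognition model), and the 0.6 matching threshold is an assumed, commonly used value.
MATCH_THRESHOLD = 0.6
best_distance = weighted_decision_ish(probe_descriptor, known_descriptors)
is_same_person = best_distance < MATCH_THRESHOLD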
def euc(a, b):
    # calculate distance
    return distance.euclidean(a, b)