def main():
    from sys import stdin
    readline = stdin.readline
    from collections import deque  # needed for the BFS queue below
    from builtins import max, min, range
    INF = 10**6
    H, W = map(int, readline().split())
    Ch, Cw = map(lambda x: int(x) - 1, readline().split())
    Dh, Dw = map(lambda x: int(x) - 1, readline().split())
    S = [readline()[:-1] for _ in range(H)]
    # t[h][w]: number of warps needed to reach (h, w); walls are marked -1
    t = [[INF] * W for _ in range(H)]
    for h in range(H):
        th = t[h]
        Sh = S[h]
        for w in range(W):
            if Sh[w] == '#':
                th[w] = -1
    t[Ch][Cw] = 0
    q = deque([(Ch, Cw)])
    warp_count = 0
    warpq = []
    while q:
        # flood-fill everything reachable by plain moves at the current cost
        while q:
            warpq.append(q[0])
            h, w = q.popleft()
            if h - 1 >= 0 and t[h - 1][w] > warp_count:
                q.append((h - 1, w))
                t[h - 1][w] = warp_count
            if h + 1 < H and t[h + 1][w] > warp_count:
                q.append((h + 1, w))
                t[h + 1][w] = warp_count
            if w - 1 >= 0 and t[h][w - 1] > warp_count:
                q.append((h, w - 1))
                t[h][w - 1] = warp_count
            if w + 1 < W and t[h][w + 1] > warp_count:
                q.append((h, w + 1))
                t[h][w + 1] = warp_count
        if t[Dh][Dw] != INF:
            break
        # expand one warp (5x5 neighbourhood) from every cell seen so far
        warp_count += 1
        for h, w in warpq:
            for i in range(max(0, h - 2), min(H, h + 3)):
                ti = t[i]
                for j in range(max(0, w - 2), min(W, w + 3)):
                    if ti[j] > warp_count:
                        ti[j] = warp_count
                        q.append((i, j))
        warpq.clear()
    if t[Dh][Dw] == INF:
        print(-1)
    else:
        print(t[Dh][Dw])
import builtins

import numpy as np


def numpy_max_pool_nd_stride_pad(input, ws, ignore_border=True, stride=None,
                                 pad=None, mode="max"):
    assert ignore_border
    nd = len(ws)
    if pad is None:
        pad = (0,) * nd
    if stride is None:
        # default to non-overlapping windows; a zero stride would
        # divide by zero in the output-shape computation below
        stride = ws
    assert len(pad) == len(ws) == len(stride)
    assert all(ws[i] > pad[i] for i in range(nd))

    def pad_img(x):
        # initialize padded input
        y = np.zeros(
            x.shape[0:-nd] + tuple(x.shape[-nd + i] + pad[i] * 2
                                   for i in range(nd)),
            dtype=x.dtype,
        )
        # place the unpadded input in the center
        block = (slice(None),) * (len(x.shape) - nd) + tuple(
            slice(pad[i], x.shape[-nd + i] + pad[i]) for i in range(nd))
        y[block] = x
        return y

    pad_img_shp = list(input.shape[:-nd])
    out_shp = list(input.shape[:-nd])
    for i in range(nd):
        padded_size = input.shape[-nd + i] + 2 * pad[i]
        pad_img_shp.append(padded_size)
        out_shp.append((padded_size - ws[i]) // stride[i] + 1)
    output_val = np.zeros(out_shp)
    padded_input = pad_img(input)

    func = np.max
    if mode == "sum":
        func = np.sum
    elif mode != "max":
        func = np.average
    inc_pad = mode == "average_inc_pad"

    for l in np.ndindex(*input.shape[:-nd]):
        for r in np.ndindex(*output_val.shape[-nd:]):
            region = []
            for i in range(nd):
                r_stride = r[i] * stride[i]
                r_end = builtins.min(r_stride + ws[i], pad_img_shp[-nd + i])
                if not inc_pad:
                    r_stride = builtins.max(r_stride, pad[i])
                    r_end = builtins.min(r_end, input.shape[-nd + i] + pad[i])
                region.append(slice(r_stride, r_end))
            # index with a tuple of slices; a list of slices is no
            # longer valid numpy indexing
            patch = padded_input[l][tuple(region)]
            output_val[l][r] = func(patch)
    return output_val
def sync_node_time(cluster):
    hosts = C_Host.objects.filter(
        Q(project_id=cluster.id) & ~Q(name='localhost')
        & ~Q(name='127.0.0.1') & ~Q(name='::1'))
    data = []
    times = []
    result = {'success': True, 'data': []}
    for host in hosts:
        gmt_date = core.apps.kubeops_api.adhoc.get_host_time(
            ip=host.ip, port=host.port, username=host.username,
            password=host.password,
            private_key_path=host.private_key_path)
        GMT_FORMAT = '%a %b %d %H:%M:%S CST %Y'
        date = time.strptime(gmt_date, GMT_FORMAT)
        time_stamp = int(time.mktime(date))
        times.append(time_stamp)
        show_time = time.strftime('%Y-%m-%d %H:%M:%S', date)
        time_data = {
            'hostname': host.name,
            'date': show_time,
        }
        data.append(time_data)
    result['data'] = data
    max_time = builtins.max(times)
    min_time = builtins.min(times)
    # flag an error if the clocks are more than five minutes apart
    # (time.mktime returns seconds, so the threshold is 300, not 300000)
    if (max_time - min_time) > 300:
        result['success'] = False
    return result
def _set_bottom(device, target):
    mode = DeviceModes(device.mode)
    current_target = device.target
    if mode is DeviceModes.heat_cool:
        # go up to target, but don't cross max
        bottom = min(target, config.max)
        # keep the top unless it must shift up to preserve a {gap}-degree distance
        top = max(current_target.high, bottom + config.gap)
        new_target = (bottom, top)
    elif mode is DeviceModes.heat or mode is DeviceModes.cool:
        # go up to target, but don't cross max
        new_target = min(target, config.max)
    else:
        new_target = current_target
    device.target = new_target
def __call__(self, *inputvals):
    assert len(inputvals) == len(self.nondata_inputs) + len(self.data_inputs)
    nondata_vals = inputvals[0:len(self.nondata_inputs)]
    data_vals = inputvals[len(self.nondata_inputs):]
    feed_dict = dict(zip(self.nondata_inputs, nondata_vals))
    n = data_vals[0].shape[0]
    for v in data_vals[1:]:
        assert v.shape[0] == n
    for i_start in range(0, n, self.batch_size):
        slice_vals = [
            v[i_start:builtins.min(i_start + self.batch_size, n)]
            for v in data_vals
        ]
        for (var, val) in zip(self.data_inputs, slice_vals):
            feed_dict[var] = val
        results = tf.get_default_session().run(self.outputs,
                                               feed_dict=feed_dict)
        if i_start == 0:
            sum_results = results
        else:
            for i in range(len(results)):
                sum_results[i] = sum_results[i] + results[i]
    for i in range(len(results)):
        sum_results[i] = sum_results[i] / n
    return sum_results
def make_split(datastream, split=0.2):
    # size the per-class sample by the smallest class so the split is balanced
    cls = count_classes(datastream)
    mx = builtins.min(cls.values())
    n = int(mx * split)
    res = (list(cls.keys())
           | select(lambda c: datastream
                    | where(lambda x: x["class_name"] == c)
                    | as_list | pshuffle | take(n))
           | chain)
    return res | select(lambda x: basename(x["filename"])) | as_list
def main():
    from builtins import int, map, list, print, min
    from collections import Counter
    import sys
    sys.setrecursionlimit(10**6)
    input = sys.stdin.readline
    input_str = (lambda: input().rstrip())
    input_list = (lambda: input().rstrip().split())
    input_number = (lambda: int(input()))
    input_number_list = (lambda: list(map(int, input_list())))

    alphabet = 'abcdefghijklmnopqrstuvwxyz'
    N = input_number()
    S = [None] * N
    for i in range(N):
        S[i] = list(input_str())
    ans = ""
    for c in alphabet:
        t = 10**9
        for i in range(N):
            t = min(t, int(Counter(S[i])[c]))
        ans += c * t
    print(ans)
def get_mechanics_nearest(lat, lng, skill):
    result = []
    # Get mechanics within a 1.5-degree bounding box around the point
    sw_lat = lat - 1.5
    sw_lng = lng - 1.5
    ne_lat = lat + 1.5
    ne_lng = lng + 1.5
    mechanics_list = get_mechanics_list(sw_lat, sw_lng, ne_lat, ne_lng, skill)
    if mechanics_list:
        if len(mechanics_list) == 1:
            nearest = [0]
        else:
            array = [(x[1], x[2]) for x in mechanics_list]
            kd_tree = spatial.cKDTree(array)
            nearest = kd_tree.query((lat, lng), min(5, len(array)))[1]
        for i in nearest:
            skills = []
            pk = mechanics_list[int(i)][0]
            user = Location.objects.get(pk=mechanics_list[int(i)][0]).user
            userprofile = user.userprofile
            name = get_name_from_user(user)
            miles = "%.1f" % geopy_vincenty(lat, lng,
                                            mechanics_list[int(i)][1],
                                            mechanics_list[int(i)][2])
            bio = userprofile.short_bio
            icon = get_icon_url_from_user(user)
            # don't shadow the `skill` parameter with the loop variable
            for user_skill in userprofile.skills.all():
                skills.append(user_skill.skill)
            rating = userprofile.rating
            ratingcount = userprofile.rating_count
            result.append((pk, name, miles, bio, icon, skills, rating,
                           ratingcount))
    return result
def _get_range(sfunc, min, max):
    """Truncate PDFs with long tails."""
    # the `min`/`max` parameters shadow the builtins, hence builtins.max/min below
    num_tails = int(sfunc.ppf(0) == np.NINF) + int(sfunc.ppf(1) == np.PINF)
    _range = options['pdf']['range']
    if num_tails:
        if num_tails == 2:
            rng = [(1.0 - _range) / 2, (1.0 + _range) / 2]
        else:
            rng = [1.0 - _range, _range]
    mmin = sfunc.ppf(0)
    if mmin == np.NINF:
        mmin = sfunc.ppf(rng[0])
    mmax = sfunc.ppf(1)
    if mmax == np.PINF:
        mmax = sfunc.ppf(rng[1])
    if min is not None:
        min = builtins.max(min, mmin)
    else:
        min = mmin
    if max is not None:
        max = builtins.min(max, mmax)
    else:
        max = mmax
    return min, max
import builtins


def chunk_columns(df, chunk_size):
    # yield successive column-wise chunks of the DataFrame
    for start in range(0, df.shape[1], chunk_size):
        chunk = df.iloc[:, start:builtins.min(start + chunk_size, df.shape[1])]
        yield chunk
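# A minimal usage sketch for chunk_columns, assuming pandas is installed;
# the DataFrame below is illustrative only.
import pandas as pd

df = pd.DataFrame({c: range(3) for c in 'abcde'})
widths = [chunk.shape[1] for chunk in chunk_columns(df, 2)]
assert widths == [2, 2, 1]  # the trailing chunk is narrower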
def findOpposite(user, facebookDF):
    suspects = []
    cols = ['userid', 'friend_count', 'friendships_initiated', 'likes',
            'likes_received', 'age', 'tenure', 'gender']
    contenders = facebookDF.loc[(facebookDF['userid'] != user)
                                & (facebookDF['likes_received'] > 1000)
                                & (facebookDF['friend_count'] > 1500), cols]
    for index, row in contenders.iterrows():
        for indexMe, rowMe in facebookDF.loc[facebookDF['userid'] == user,
                                             cols].iterrows():
            suspects.append([rowMe, row, 0])
    for duo in suspects:
        doppelScore = computeDoppelScore(duo[0], duo[1])
        duo[2] = doppelScore
    scoreList = []
    for triplet in suspects:
        scoreList.append(triplet[2])
    minScore = min(scoreList)
    # return the first pair achieving the minimum score
    for triplet in suspects:
        if triplet[2] == minScore:
            triplet[0] = {'userid': triplet[0]['userid']}
            triplet[1] = {'userid': triplet[1]['userid']}
            triplet[2] = {'doppelScore': triplet[2]}
            return triplet
def HPDF(data, min=None, max=None):
    """
    Histogram PDF - initialized with points from a histogram.

    This function creates a PDF from a histogram. This is useful when
    some other software has generated a PDF from your data.

    :param data: A two dimensional array. The first column is the histogram
        interval mean, and the second column is the probability. The
        probability values do not need to be normalized.
    :param min: A minimum value for the PDF range. If your histogram has
        values very close to 0, and you know values of 0 are impossible,
        then you should set the ***min*** parameter.
    :param max: A maximum value for the PDF range.
    :type data: 2D numpy array
    :returns: A PDF object.
    """
    x = data[:, 0]
    y = data[:, 1]
    sp = interpolate.splrep(x, y)
    dx = (x[1] - x[0]) / 2.0
    mmin = x[0] - dx
    mmax = x[-1] + dx
    if min is not None:
        mmin = builtins.max(min, mmin)
    if max is not None:
        mmax = builtins.min(max, mmax)
    x = np.linspace(mmin, mmax, options['pdf']['numpart'])
    y = interpolate.splev(x, sp)
    y[y < 0] = 0  # if the extrapolation goes negative...
    return PDF(x, y)
def add(self, start, length):
    assert start >= 0
    assert length > 0
    #print(" ADD [%d+%d -%d) to %s" % (start, length, start+length, self.dump()))
    first_overlap = last_overlap = None
    for i, (s_start, s_length) in enumerate(self._spans):
        #print(" (%d+%d)-> overlap=%s adjacent=%s" % (s_start, s_length, overlap(s_start, s_length, start, length), adjacent(s_start, s_length, start, length)))
        if (overlap(s_start, s_length, start, length)
                or adjacent(s_start, s_length, start, length)):
            last_overlap = i
            if first_overlap is None:
                first_overlap = i
            continue
        # no overlap
        if first_overlap is not None:
            break
    #print(" first_overlap", first_overlap, last_overlap)
    if first_overlap is None:
        # no overlap, so just insert the span and sort by starting
        # position.
        self._spans.insert(0, (start, length))
        self._spans.sort()
    else:
        # everything from [first_overlap] to [last_overlap] overlapped
        first_start, first_length = self._spans[first_overlap]
        last_start, last_length = self._spans[last_overlap]
        newspan_start = min(start, first_start)
        newspan_end = max(start + length, last_start + last_length)
        newspan_length = newspan_end - newspan_start
        newspan = (newspan_start, newspan_length)
        self._spans[first_overlap:last_overlap + 1] = [newspan]
    #print(" ADD done: %s" % self.dump())
    self._check()
    return self
def dijkstra(G):
    """
    Dijkstra algorithm for finding the shortest path from start position to end.
    """
    srcIdx = G.vex2idx[G.startpos]
    dstIdx = G.vex2idx[G.endpos]

    # build dijkstra
    nodes = list(G.neighbors.keys())
    dist = {node: float('inf') for node in nodes}
    prev = {node: None for node in nodes}
    dist[srcIdx] = 0

    while nodes:
        curNode = min(nodes, key=lambda node: dist[node])
        nodes.remove(curNode)
        if dist[curNode] == float('inf'):
            break
        for neighbor, cost in G.neighbors[curNode]:
            newCost = dist[curNode] + cost
            if newCost < dist[neighbor]:
                dist[neighbor] = newCost
                prev[neighbor] = curNode

    # retrieve path
    path = deque()
    curNode = dstIdx
    while prev[curNode] is not None:
        path.appendleft(G.vertices[curNode])
        curNode = prev[curNode]
    path.appendleft(G.vertices[curNode])
    return list(path)
def sync_node_time(cluster):
    hosts = C_Host.objects.filter(
        Q(project_id=cluster.id) & ~Q(name='localhost')
        & ~Q(name='127.0.0.1') & ~Q(name='::1'))
    data = []
    times = []
    result = {'success': True, 'data': []}
    for host in hosts:
        ssh_config = SshConfig(host=host.ip, port=host.port,
                               username=host.username,
                               password=host.password, private_key=None)
        ssh_client = SSHClient(ssh_config)
        res = ssh_client.run_cmd('date')
        gmt_date = res[0]
        GMT_FORMAT = '%a %b %d %H:%M:%S CST %Y'
        date = time.strptime(gmt_date, GMT_FORMAT)
        time_stamp = int(time.mktime(date))
        times.append(time_stamp)
        show_time = time.strftime('%Y-%m-%d %H:%M:%S', date)
        time_data = {
            'hostname': host.name,
            'date': show_time,
        }
        data.append(time_data)
    result['data'] = data
    max_time = builtins.max(times)
    min_time = builtins.min(times)
    # flag an error if the clocks are more than five minutes apart
    # (time.mktime returns seconds, so the threshold is 300, not 300000)
    if (max_time - min_time) > 300:
        result['success'] = False
    return result
def test_more_hypothesis(self, peers, shares):
    """
    Similar to test_unhappy: test that the resulting happiness is
    always either the number of peers or the number of shares,
    whichever is smaller.
    """
    # https://hypothesis.readthedocs.io/en/latest/data.html#hypothesis.strategies.sets
    # hypothesis.strategies.sets(elements=None, min_size=None, average_size=None, max_size=None)[source]
    # XXX would be nice to parameterize these by hypothesis too
    readonly_peers = set()
    peers_to_shares = {}

    places = happiness_upload.share_placement(peers, readonly_peers,
                                              set(list(shares)),
                                              peers_to_shares)
    happiness = happiness_upload.calculate_happiness(places)

    # every share should get placed
    assert set(places.keys()) == shares

    # we should only use peers that exist
    assert set(places.values()).issubset(peers)

    # if we have more shares than peers, happiness is capped at the
    # number of peers; if we have fewer shares than peers, it is
    # capped at the number of shares.
    assert happiness == min(len(peers), len(shares))
import builtins
import warnings


def min(iterable, **kwargs):
    warnings.warn(
        "pipe.min is deprecated, use the builtin min() instead.",
        DeprecationWarning,
        stacklevel=4,
    )
    return builtins.min(iterable, **kwargs)
import numpy as np


def new_vertex(randvex, nearvex, stepSize):
    # step from nearvex toward randvex, moving at most stepSize
    dirn = np.array(randvex) - np.array(nearvex)
    length = np.linalg.norm(dirn)
    dirn = (dirn / length) * min(stepSize, length)

    newvex = (nearvex[0] + dirn[0],
              nearvex[1] + dirn[1],
              nearvex[2] + dirn[2])
    return newvex
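# A quick check of new_vertex's behavior: from the origin toward
# (10, 0, 0) with stepSize 2, the step is clipped to length 2.
assert np.allclose(new_vertex((10.0, 0.0, 0.0), (0.0, 0.0, 0.0), 2.0),
                   (2.0, 0.0, 0.0))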
def drawSet(set, dom):
    seq = [item['value'] for item in set]
    max_value = builtins.max(seq)
    min_value = builtins.min(seq)
    height = int(dom.getAttribute("SVG", "height"))
    min_value = 0 if min_value > 0 else min_value
    svg = atlastk.createHTML()
    for i in range(len(set)):
        svg.pushTag("rect")
        svg.putAttribute("x", str(i * 100 / len(set)) + "%")
        svg.putAttribute("y", height - set[i]["value"] * height / max_value)
        svg.putAttribute("width", str(100 / len(set)) + "%")
        svg.putAttribute("height", str(100 * set[i]["value"] / max_value) + "%")
        svg.putTagAndValue("title",
                           set[i]["date"] + " : " + str(set[i]["value"]))
        svg.popTag()
    dom.inner("SVG", svg)
    dom.setValue("Text", set[0]["date"] + " - " + set[len(set) - 1]["date"])
def getKeyRange(m):
    low = 0xFFFFFFFF
    high = 0
    for key in m:
        iKey = int(key)
        low = min(iKey, low)
        high = max(iKey, high)
    return Range(low, high)
def __init__(self, point_map, original, generated):
    self.generated = generated
    self.original = original
    self.point_map = point_map
    self.width = len(point_map[0])
    self.height = len(point_map)
    self.cell_size = int(min(MAX_X / self.width, MAX_Y / self.height))
    self.offset = self.cell_size / 2
def __init__(self, *args: ty.Optional[int]):
    s = builtins.slice(*args)
    start, stop, step = (
        s.start or 0,
        # `s.stop or builtins.min(s.stop, MAX_RANGE)` would call
        # min(None, ...) whenever stop is missing; clamp explicitly instead
        builtins.min(s.stop, MAX_RANGE) if s.stop is not None else MAX_RANGE,
        s.step or 1,
    )
    self._it = builtins.iter(builtins.range(start, stop, step))
def shortest_path(edge, mid_contour, w, h):
    pix_val = []
    half_cols = w // 2
    total_cols = w
    total_rows = h
    # range goes from halfway through the x direction and
    # the whole way in the y direction
    for i in range(half_cols):
        for j in range(total_rows):
            g = edge[j][i][1]
            if g == 255:
                pix_val.append([i, j])

    min_distance = float("inf")
    val_x1, val_y1, val_x2, val_y2 = -1, -1, -1, -1
    for coor in pix_val:
        col, row = coor[0], coor[1]
        theta = math.atan2((mid_contour[0] - col), (mid_contour[1] - row))
        for radius in range(int(min(total_rows, total_cols) / 2)):
            new_col = min(mid_contour[0] + math.sin(theta) * radius,
                          total_cols - 1)
            new_row = min(mid_contour[1] + math.cos(theta) * radius,
                          total_rows - 1)
            # print((new_col, new_row))
            g = edge[int(new_row)][int(new_col)][1]
            if g == 255:
                dist = distance(new_col, new_row, col, row)
                if dist < min_distance:
                    val_x1 = col
                    val_y1 = row
                    val_x2 = new_col
                    val_y2 = new_row
                    min_distance = dist

    cv2.circle(edge, (int(val_x1), int(val_y1)), 5, (255, 0, 255), -1)
    cv2.circle(edge, (int(val_x2), int(val_y2)), 5, (255, 0, 255), -1)
    cv2.circle(edge, (int(mid_contour[0]), int(mid_contour[1])), 5,
               (255, 0, 255), -1)
    cv2.imshow("points", edge)
    cv2.waitKey(0)
    return ("Shortest path: ", val_x1, val_y1, " to ", val_x2, val_y2,
            " Distance: ", min_distance)
def get_best(self, data_name, is_larger_better=True):
    if data_name in self:
        steps, values = zip(*self[data_name].copy())
        if is_larger_better:
            return builtins.max(values)
        else:
            return builtins.min(values)
    else:
        raise ValueError('{0} is not in this History.'.format(data_name))
def minmax(cp, size):
    # scan every sample in the fragment, tracking the extremes
    _check_params(len(cp), size)
    min_sample, max_sample = 0x7fffffff, -0x80000000
    for sample in _get_samples(cp, size):
        max_sample = builtins.max(sample, max_sample)
        min_sample = builtins.min(sample, min_sample)
    return min_sample, max_sample
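# Usage sketch for minmax, assuming the audioop-style helpers
# (_check_params, _get_samples) from the same module yield signed
# samples; for a 16-bit fragment (size=2):
import array

fragment = array.array('h', [-3, 7, 0, -11, 5]).tobytes()
assert minmax(fragment, 2) == (-11, 7)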
def on_loss_calculation_end(self, training_context):
    """Returns mixed inputs, pairs of targets, and lambda"""
    train_data = training_context['train_data']
    x = train_data.value_list[0].copy().detach()  # input
    y = train_data.value_list[1].copy().detach()  # label
    model = training_context['current_model']

    # clamp the mixing coefficient to [0.3, 0.7]
    lam = builtins.min(
        builtins.max(np.random.beta(self.alpha, self.alpha), 0.3), 0.7)
    batch_size = int_shape(x)[0]
    index = arange(batch_size)
    index = cast(shuffle(index), 'long')
    this_loss = None
    mixed_x = None
    if get_backend() == 'pytorch':
        mixed_x = lam * x + (1 - lam) * x[index, :]
        pred = model(to_tensor(mixed_x, requires_grad=True))
        y_a, y_b = y, y[index]
        this_loss = lam * self.loss_criterion(pred, y_a.long()) + (
            1 - lam) * self.loss_criterion(pred, y_b.long())
    elif get_backend() == 'tensorflow':
        x1 = tf.gather(x, index, axis=0)
        y1 = tf.gather(y, index, axis=0)
        mixed_x = lam * x + (1 - lam) * x1
        pred = model(to_tensor(mixed_x, requires_grad=True))
        y_a, y_b = y, y1
        this_loss = lam * self.loss_criterion(pred, y_a) + (
            1 - lam) * self.loss_criterion(pred, y_b)

    training_context['current_loss'] = training_context[
        'current_loss'] + this_loss * self.loss_weight
    if training_context['is_collect_data']:
        training_context['losses'].collect(
            'mixup_loss', training_context['steps'],
            float(to_numpy(this_loss * self.loss_weight)))

    if training_context['current_batch'] == 0:
        for item in mixed_x:
            if self.save_path is None and not is_in_colab():
                item = unnormalize([0.485, 0.456, 0.406],
                                   [0.229, 0.224, 0.225])(to_numpy(item))
                item = unnormalize(0, 255)(item)
                array2image(item).save(
                    'Results/mixup_{0}.jpg'.format(get_time_suffix()))
            elif self.save_path is not None:
                item = unnormalize([0.485, 0.456, 0.406],
                                   [0.229, 0.224, 0.225])(to_numpy(item))
                item = unnormalize(0, 255)(item)
                array2image(item).save(
                    os.path.join(self.save_path,
                                 'mixup_{0}.jpg'.format(get_time_suffix())))
import cv2
import numpy as np


def auto_canny(image, sigma=0.33):
    # compute the median of the single channel pixel intensities
    v = np.median(image)

    # apply automatic Canny edge detection using the computed median
    lower = int(max(0, (1.0 - sigma) * v))
    upper = int(min(255, (1.0 + sigma) * v))
    edged = cv2.Canny(image, lower, upper)

    # return the edged image
    return edged
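# Usage sketch for auto_canny; "example.jpg" is a hypothetical path.
img = cv2.imread("example.jpg", cv2.IMREAD_GRAYSCALE)
edges = auto_canny(img)             # default sigma=0.33
tight = auto_canny(img, sigma=0.1)  # narrower band around the median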
def __init__(self, *args: ty.Optional[int], elements: int = 1):
    s = builtins.slice(*args)
    start, stop, step = (
        s.start or 0,
        # avoid min(None, MAX_RANGE) when no stop was given
        builtins.min(s.stop, MAX_RANGE) if s.stop is not None else MAX_RANGE,
        s.step or 1,
    )
    self._it = builtins.tuple(
        builtins.iter(builtins.range(start, stop, step))
        for _ in builtins.range(elements)
    )
def img_op(image: np.ndarray, **kwargs):
    image = np.clip(image, 0.0, 255.0)
    gammamin, gammamax = gamma_range
    avg_pix = image.mean()
    # clamp the random gamma range for very bright or very dark images
    if avg_pix > 220:
        gammamax = builtins.max(gammamin, 1)
    elif avg_pix < 30:
        gammamin = builtins.min(1, gammamax)
    gamma = np.random.choice(np.arange(gammamin, gammamax, 0.01))
    return exposure.adjust_gamma(image / 255.0, gamma) * 255.0
def overlap(start0, length0, start1, length1):
    # return start2,length2 of the overlapping region, or None
    #  00      00   000   0000  00   00  000   00  00  00   00
    #   11    11     11    11   111  11   11  1111 111 11   11
    left = max(start0, start1)
    right = min(start0 + length0, start1 + length1)
    # if there is overlap, 'left' will be its start, and right-1 will
    # be the end
    if left < right:
        return (left, right - left)
    return None
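# Behavior sketch for overlap: spans are (start, length) pairs.
assert overlap(0, 10, 5, 10) == (5, 5)  # [0,10) and [5,15) share [5,10)
assert overlap(0, 5, 5, 5) is None      # touching spans do not overlap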
def main():
    from builtins import int, map, list, print, min
    import sys
    sys.setrecursionlimit(10**6)
    input = sys.stdin.readline
    input_list = (lambda: input().rstrip().split())
    input_number = (lambda: int(input()))
    input_number_list = (lambda: list(map(int, input_list())))

    K, N = input_number_list()
    A = input_number_list()
    A.sort()
    ans = A[-1] - A[0]
    for i in range(1, N - 1):
        l = A[i] + (K - A[i + 1])
        r = K - A[i] + A[i - 1]
        ans = min(ans, l, r)
    ans = min(ans, K - A[-1] + A[-2])
    print(ans)
import builtins


def min(*args):
    """Override the builtin min function to expand list arguments.

    Arguments:
    *args -- lists of numbers or individual numbers
    """
    fullList = []
    for arg in args:
        if hasattr(arg, 'extend'):
            fullList.extend(arg)
        else:
            fullList.append(arg)
    if not fullList:
        return 0
    return builtins.min(fullList)
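# Behavior sketch for the list-expanding min defined above:
assert min([3, 1, 2], 5, [0, 4]) == 0  # list arguments are flattened
assert min() == 0                      # empty input falls back to 0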
def _nanmin(values, axis=None, skipna=True):
    values, mask, dtype = _get_values(values, skipna, fill_value_typ='+inf')

    # numpy 1.6.1 workaround in Python 3.x
    if values.dtype == np.object_ and sys.version_info[0] >= 3:  # pragma: no cover
        import builtins
        if values.ndim > 1:
            apply_ax = axis if axis is not None else 0
            result = np.apply_along_axis(builtins.min, apply_ax, values)
        else:
            result = builtins.min(values)
    else:
        if ((axis is not None and values.shape[axis] == 0)
                or values.size == 0):
            result = com.ensure_float(values.sum(axis))
            result.fill(np.nan)
        else:
            result = values.min(axis)

    result = _wrap_results(result, dtype)
    return _maybe_null_out(result, axis, mask)
import builtins


def min2(*args):
    # minimum of the non-None arguments; None if nothing is left
    return builtins.min(filter(lambda x: x is not None, args), default=None)
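# Behavior sketch for min2:
assert min2(3, None, 1) == 1
assert min2(None, None) is None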
import builtins


def min(iterable, **kwargs):
    return builtins.min(iterable, **kwargs)
import builtins


def min(expr, pos, value, **kwargs):
    return builtins.min(value, **kwargs)
import builtins


def min(a, b):
    """Returns the minimum of a and b."""
    return builtins.min(a, b)