def __init__(self):
    self.__nodes = {}   # Dict of nodes
    self.__root = 0     # Root TID - Requester's task
    self.__count = 0    # tid count generator
    self.__q = pq()     # Priority Queue
    self.__sq = pq()    # sapper Priority Queue
    self.__qlen = 0     # Queue length
    self.__sqlen = 0    # sapper Queue length
    self.__atasks = {}  # Active leaf tasks
    self.__amt = 0      # Total cash allocated for the task
class MyTest(unittest.TestCase):
    t = task_scheduler.TaskScheduler()
    taskQueue = pq()
    task = task_scheduler.task.Task("task1", "Y", 1, 100)
    task1 = task_scheduler.task.Task("task1", "Y", 4, 100)
    processor = task_scheduler.processor.Processor("compute1", 2)
    processorFreeQueue = pq()
    processorBusyQueue = pq()

    def test_processorCoresNotAvailable(self):
        # Queue.put() returns None, so the queue must be populated
        # before it is handed to TaskManager
        self.processorFreeQueue.put(self.processor)
        self.taskQueue.put(self.task1)
        manager = processor_manager.ProcessorManager(
            self.processorFreeQueue, self.processorBusyQueue,
            task_manager.TaskManager(self.taskQueue))
        self.assertEqual(manager.getBestAvailableFreeProcessor(self.task1), -1)

    def test_processorAvailable(self):
        self.processorFreeQueue.put(self.processor)
        self.taskQueue.put(self.task)
        manager = processor_manager.ProcessorManager(
            self.processorFreeQueue, self.processorBusyQueue,
            task_manager.TaskManager(self.taskQueue))
        self.assertIsInstance(manager.getBestAvailableFreeProcessor(self.task),
                              processor.Processor)
def _enumerate_element(self, element, evidence, cache):
    p, s = element
    if s.is_false_sdd:
        return
    last = 0
    if last not in cache:
        queue = pq()
        model = EnumerateModel(evidence=evidence, element=element)
        queue.put(model)
        cache[last] = queue
    while True:
        item = cache[last]
        if type(item) is tuple:
            last += 1
            yield item
        else:  # type is priority queue
            queue = item
            if queue.empty():
                break
            model = queue.get()
            val, inst = model.val_inst()
            cache[last] = val, inst
            pnext = model.pnext()
            if pnext is not None:
                queue.put(pnext)
            snext = model.snext()
            if snext is not None:
                queue.put(snext)
            last += 1
            cache[last] = queue
            yield val, inst
def bfs(grid, unit):
    startR, startC = getCoord(grid, unit)
    q = pq()
    dist = {}
    prev = {}
    q.put((0, startR, startC))
    dist[(startR, startC)] = 0
    prev[(startR, startC)] = (startR, startC)
    while not q.empty():
        di, r, c = q.get()
        for d in range(4):
            nr = r + dr[d]
            nc = c + dc[d]
            if isUnit(grid[nr][nc]) and kind[grid[nr][nc]] != kind[grid[startR][startC]]:
                while dist[(r, c)] > 1:
                    r, c = prev[(r, c)]
                return r, c
        for d in range(4):
            nr = r + dr[d]
            nc = c + dc[d]
            if grid[nr][nc] == '.':
                if (nr, nc) not in dist or di + 1 < dist[(nr, nc)]:
                    dist[(nr, nc)] = di + 1
                    prev[(nr, nc)] = (nr, nc) if di == 0 else prev[(r, c)]
                    q.put((di + 1, nr, nc))
                elif di + 1 == dist[(nr, nc)]:
                    prev[(nr, nc)] = min(prev[(nr, nc)], prev[(r, c)])
    return startR, startC
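The priority-queue flood fill used above can be sketched in a self-contained form. `grid_distances` below is a hypothetical helper, not part of the original: it only computes distances from a start cell to every open (`'.'`) cell, dropping the unit/target logic, and uses Python 3's `queue` module where the snippets here use the Python 2 `Queue` module.

```python
from queue import PriorityQueue  # Python 3 name of the Py2 `Queue` module

def grid_distances(grid, start):
    # Minimal sketch of the queue-driven expansion above: pop the
    # closest frontier cell, relax its four neighbours, repeat.
    dr, dc = (-1, 1, 0, 0), (0, 0, -1, 1)
    q = PriorityQueue()
    q.put((0, start[0], start[1]))
    dist = {start: 0}
    while not q.empty():
        d, r, c = q.get()
        for k in range(4):
            nr, nc = r + dr[k], c + dc[k]
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) \
                    and grid[nr][nc] == '.' and (nr, nc) not in dist:
                dist[(nr, nc)] = d + 1
                q.put((d + 1, nr, nc))
    return dist
```

Because the priority is the distance, cells are popped in nondecreasing distance order, so the first recorded distance for a cell is final.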
class TaskManager(object):
    taskQueue = pq()

    def __init__(self, taskQueue):
        self.taskQueue = taskQueue

    # This method returns the task in queue which is ready to be executed
    def getNextTask(self):
        if not self.taskQueue.empty():
            task = self.taskQueue.get(block=True, timeout=None)
            if task.status == 'Y' or task.status == 'S' or task.status == 'D':
                return task
            else:
                self.taskQueue.put(task)
                return None
        return None

    def addTaskToQueue(self, task):
        self.taskQueue.put(task)

    # This method marks the task as complete when ticks become 0 and
    # makes the status of tasks that depend on it ready
    def markTaskAsCompleteAndUpdateDependencies(self, task):
        self.markTaskAsComplete(task)
        self.updatePostTasks(task)

    def updatePostTasks(self, task):
        if task.getPostReq() is not None:
            for postTask in task.getPostReq():
                self.updatePreTask(postTask)

    def updatePreTask(self, task):
        for preReqTask in task.getPreReq():
            if preReqTask.getStatus() == 'S':
                self.markTaskAsDiscarded(task)
                return
            if preReqTask.getStatus() != 'C':
                return
        self.markTaskAsReady(task)

    def markTaskAsReady(self, task):
        task.setStatus("Y")

    def markTaskAsComplete(self, task):
        task.setStatus("C")

    def markTaskAsDiscarded(self, task):
        task.setStatus("S")
        print task.name + " cannot be assigned because resources required exceed available resources or parent task has failed execution!"

    def markTaskAsDeadlocked(self, task):
        task.setStatus('D')
        print task.name + " cannot be scheduled because of deadlock!"
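The status transitions in `updatePreTask` are the core of the scheduler's dependency handling: a task becomes ready (`'Y'`) only once every prerequisite has completed (`'C'`), and a single failed prerequisite (`'S'`) discards it. A minimal sketch, with a hypothetical stand-in `Task` class (the real `task_scheduler.task.Task` is not shown here) and Python 3 syntax:

```python
class Task:
    # Hypothetical stand-in, just enough state to exercise the rule.
    def __init__(self, name, status='N'):
        self.name, self.status = name, status
        self.pre, self.post = [], []

def update_pre_task(task):
    # Mirrors TaskManager.updatePreTask: one discarded prerequisite
    # discards the task; any incomplete prerequisite leaves it
    # untouched; otherwise the task becomes ready.
    for pre in task.pre:
        if pre.status == 'S':
            task.status = 'S'
            return
        if pre.status != 'C':
            return
    task.status = 'Y'
```

Note that the "any incomplete prerequisite" check returns without changing the status, so the task stays not-ready until the last parent finishes.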
def _enumerate_decomposition(self, evidence):
    # set up cache for elements
    if self.data is None:
        self.data = {}
        for element in self.elements:
            self.data[element] = {}
    queue = pq()
    for element in self.elements:
        theta = self.theta[element]
        cache = self.data[element]
        eit = self._enumerate_element(element, evidence, cache)
        self._enumerate_update_queue(theta, eit, queue)
    while not queue.empty():
        val, inst, theta, eit = queue.get()
        self._enumerate_update_queue(theta, eit, queue)
        yield (-val, inst)
def rank_nodes(num_new):
    num_pre = G.number_of_nodes() - num_new
    num_modify = params_dynamic['num_modify']
    if num_modify == 0:
        return
    delta_list = [0.0] * num_pre
    for u, v in G.edges():
        if u >= num_pre or v >= num_pre:
            continue
        delta_list[u] += float(G[u][v]['weight']) * abs(G[u][v]['delta'])
        delta_list[v] += float(G[u][v]['weight']) * abs(G[u][v]['delta'])
    for u in G:
        if u >= num_pre:
            continue
        delta_list[u] /= (G.node[u]['in_degree'] + G.node[u]['out_degree'])
    q = pq()
    for u in G:
        if u >= num_pre:
            continue
        if q.qsize() < num_modify:
            q.put_nowait((delta_list[u], u))
            continue
        items = q.get_nowait()
        if items[0] < delta_list[u]:
            q.put_nowait((delta_list[u], u))
        else:
            q.put_nowait(items)
    idx = num_pre - 1
    while not q.empty():
        u = q.get_nowait()[1]
        um = mapp[u]
        v = rmapp[idx]
        mapp[u] = idx
        rmapp[idx] = u
        mapp[v] = um
        rmapp[um] = v
        embeddings[[um, idx], :] = embeddings[[idx, um], :]
        weights[[um, idx], :] = weights[[idx, um], :]
        idx -= 1
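The bounded-size queue in `rank_nodes` (and again in `init` further down) is the standard top-k selection pattern: keep a min-queue of size k and evict its minimum whenever a larger score arrives. A self-contained sketch of just that pattern, using Python 3's `heapq` rather than the snippets' `PriorityQueue` (`top_k` is a hypothetical helper, not in the original):

```python
import heapq

def top_k(scores, k):
    # Size-bounded min-heap: the root is the smallest of the k
    # retained scores, so replacing it with any larger score keeps
    # exactly the k largest seen so far.
    heap = []
    for s in scores:
        if len(heap) < k:
            heapq.heappush(heap, s)
        elif heap[0] < s:
            heapq.heapreplace(heap, s)
    return sorted(heap, reverse=True)
```

This runs in O(n log k) rather than the O(n log n) of sorting everything, which is why both snippets use it for picking the few highest-delta / highest-degree nodes.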
def mergeKLists(self, lists):
    """
    :type lists: List[ListNode]
    :rtype: ListNode
    """
    p = pq()
    for root in lists:
        if root:
            p.put((root.val, root))
    dummy = ListNode(0)
    iterate = dummy
    while not p.empty():
        last_val = p.get()
        iterate.next = ListNode(last_val[0])
        change = last_val[1]
        if change.next:
            p.put((change.next.val, change.next))
        iterate = iterate.next
    return dummy.next
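The same k-way merge can be sketched on plain Python lists, which also sidesteps a Python 3 pitfall in the snippet above: `(root.val, root)` tuples raise `TypeError` on equal values because `ListNode` objects are not orderable, so the heap key below is `(value, list_index, element_index)` instead. `merge_k_sorted` is a hypothetical illustration, not the original method:

```python
import heapq

def merge_k_sorted(lists):
    # Seed the heap with the head of every non-empty list; the two
    # integer indices break ties so values are never compared to
    # anything unorderable.
    heap = [(lst[0], i, 0) for i, lst in enumerate(lists) if lst]
    heapq.heapify(heap)
    merged = []
    while heap:
        val, i, j = heapq.heappop(heap)
        merged.append(val)
        if j + 1 < len(lists[i]):
            # Advance within list i, keeping one candidate per list.
            heapq.heappush(heap, (lists[i][j + 1], i, j + 1))
    return merged
```

With k lists and n total elements this is O(n log k), since the heap never holds more than one candidate per list.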
def computeAnswer(senatorCounts):
    senatorQueue = pq()
    answer = ""
    # put all senators on the queue, then pop off the largest size.
    # put them on the queue by the negative size since python pqueues are mins
    for index in xrange(len(senatorCounts)):
        senatorQueue.put((-1 * senatorCounts[index], chr(index + 65)))
    numSenatorsLeft = sum(senatorCounts)
    while numSenatorsLeft > 0:
        firstSenator = removeOneSenator(senatorQueue)
        if numSenatorsLeft != 3:
            secondSenator = removeOneSenator(senatorQueue)
            exitString = firstSenator + secondSenator
            numSenatorsLeft -= 2
        else:
            exitString = firstSenator
            numSenatorsLeft -= 1
        answer += " " + exitString
    return answer.strip()
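The negate-the-priority trick in the comment above is worth isolating: Python's priority queues are min-first, so pushing `-size` makes the smallest stored tuple correspond to the largest party. A minimal Python 3 sketch with `heapq` (`largest_party` is a hypothetical helper, not part of the original):

```python
import heapq

def largest_party(counts):
    # Min-heap of (-size, name): the heap's minimum is the most
    # negative size, i.e. the largest party. Parties are named
    # 'A', 'B', ... as in the snippet above (chr(index + 65)).
    heap = [(-n, chr(i + 65)) for i, n in enumerate(counts)]
    heapq.heapify(heap)
    size, name = heapq.heappop(heap)
    return name, -size
```

The tuple ordering also gives a free tiebreak: equal-sized parties pop in alphabetical order, since the second tuple element is compared next.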
def __init__(self, env, robot, goal, heuristicType, noNeighbors=4):
    # defining environment, robot, and start and goal configurations
    # reference: goalconfig = [2.6,-1.3,-pi/2] and startconfig = [-3.4,-1.4,0]
    self.env = env
    self.robot = robot
    self.goal = goal
    self.start = robot.GetTransform()[:2, 3].tolist()
    self.start.append(0)
    self.robot.SetActiveDOFValues(self.start)
    # self.start.append(goal[2])
    self.startTransform = robot.GetTransform()
    # defining step size, i.e. the minimum distance the robot can translate and rotate
    self.stepSize = 0.1
    self.angleStep = -pi / 4
    # openlist is the frontier and closedlist is explored
    self.frontier = pq(maxsize=0)
    self.explored = {}
    self.stepCost = 0.1
    self.noNeighbors = noNeighbors
    self.heuristicType = heuristicType
    self.cost_so_far = {}
    # defining start node
    self.startNode = Node(self.start, None, 0, self._h(self.start))
    self.frontier.put(self.startNode)
    self.cost_so_far = {self.startNode: 0}
    self.came_from = {self.startNode: None}
    self.algorithm()
    self.robot.SetActiveDOFValues(self.start)
def get_user_info(request):
    if not (request.is_ajax() and request.POST):
        raise Http404
    n_suggest = 5
    similarity_thresh = 0.3
    result = []
    heap = pq(maxsize=n_suggest)
    uid = request.POST.get('uid')
    print 'uid:', uid
    r = redis.StrictRedis(host='10.0.0.12', port=6379, password='')
    try:
        g = find_group(r, uid, 10)
        g2 = find_group(r, uid, 200, wd=False, wc=True)
        m = generate_matrix(r, g)
        group_interests = jaccard_first_member(m)[0]
        for i, u in enumerate(g[1:]):
            if group_interests[i + 1] >= similarity_thresh:
                if heap.full():
                    if heap.queue[0][0] < group_interests[i + 1]:
                        heap.get()
                        heap.put((group_interests[i + 1], u[0], u[1]))
                else:
                    heap.put((group_interests[i + 1], u[0], u[1]))
                # result += [(u[0], u[1], group_interests[i+1])]
        while not heap.empty():
            a = heap.get()
            result += [(a[1], a[2], a[0])]
        result = result[::-1]
        f = {'uid': uid, 'result': result[:5], 'group': g, 'more_people': g2}
    except:
        f = {'uid': uid, 'result': [], 'group': [], 'more_people': []}
    data = json.dumps(f)
    return HttpResponse(data, content_type='application/json')
from Queue import PriorityQueue as pq

l = [1, 2, 3, 4, 5, 6, 8, 7, 5, 32]
pq1 = pq()
pq2 = pq()
print(pq1)
from Node import *
from Queue import PriorityQueue as pq
import math, numpy, rospy
import time
from nav_msgs.msg import GridCells
from geometry_msgs.msg import Twist, Point, Pose, PoseStamped, PoseWithCovarianceStamped
import copy
from lab3_grid_cells import run

# closed list is a list called explored,
# containing all the nodes the algorithm has expanded
explored = []
# open list is a priority queue called frontier,
# containing the unexplored children of frontier nodes
frontier = pq()
frontierDisplay = []  # create an empty list to hold nodes


def astar(startNode, goal, mapData, grid, startPos):
    """
    """
    # Calculate travel distance from one cell to another
    # d1 is the orthogonal distance from one cell to another
    d1 = grid.cell_width
    # d2 is the diagonal distance from one cell to another
    d2 = math.sqrt(2 * (d1**2))
    # Add the starting node to the frontier
    frontier.put(startNode)
def init(params, info, **kwargs):
    res = params_handler(params, info)
    p = ct.obj_dic(params)
    G = gh.load_unweighted_digraph(p.network_path, p.is_directed)
    info["num_edges"] = len(G.edges())

    # top-k nodes
    q = pq()
    for idx, u in enumerate(G):
        if idx < p.num_top:
            q.put_nowait((G.node[u]["in_degree"], u))
        else:
            tmp = q.get_nowait()
            if tmp[0] <= G.node[u]["in_degree"]:
                q.put_nowait((G.node[u]["in_degree"], u))
            else:
                q.put_nowait(tmp)
    top_lst = []
    top_set = set()
    while not q.empty():
        top_lst.append(q.get_nowait()[1])
        top_set.add(top_lst[-1])
    print "top_lst: " + str(top_lst)

    node_lst = []
    for u in G:
        if u not in top_set:
            node_lst.append(u)
    remain_size = len(node_lst)
    num_community = (remain_size + p.community_bound - 1) // p.community_bound
    num_community_large = remain_size % num_community
    num_community_small = num_community - num_community_large
    community_size_small = remain_size // num_community
    community_size_large = community_size_small + 1
    #print remain_size, num_community, num_community_small, num_community_large, community_size_small, community_size_large

    topk_params = {
        "embeddings": pi.initialize_embeddings(p.num_top, p.dim),
        "weights": pi.initialize_weights(p.num_top, p.dim),
        "in_degree": [G.node[i]["in_degree"] for i in top_lst],
        "out_degree": [G.node[i]["out_degree"] for i in top_lst],
        "map": {i: top_lst[i] for i in xrange(len(top_lst))}
    }
    #print topk_params
    with io.open(os.path.join(p.res_path, "topk_info.pkl"), "wb") as f:
        pickle.dump(topk_params, f)

    def deal_subgraph(idx, st, ed):
        sub_params = {
            "embeddings": pi.initialize_embeddings(ed - st, p.dim),
            "weights": pi.initialize_weights(ed - st, p.dim),
            "in_degree": [G.node[node_lst[st + i]]["in_degree"] for i in xrange(ed - st)],
            "out_degree": [G.node[node_lst[st + i]]["out_degree"] for i in xrange(ed - st)],
            "map": {i: node_lst[st + i] for i in xrange(ed - st)}
        }
        #print sub_params
        with io.open(os.path.join(p.res_path, "%d_info.pkl" % idx), "wb") as f:
            pickle.dump(sub_params, f)

    for i in xrange(num_community_small):
        deal_subgraph(i, i * community_size_small, (i + 1) * community_size_small)
    tmp = num_community_small * community_size_small
    for i in xrange(num_community_small, num_community):
        deal_subgraph(
            i, tmp + (i - num_community_small) * community_size_large,
            tmp + (i - num_community_small + 1) * community_size_large)

    info["num_community"] = num_community
    info["num_community_small"] = num_community_small
    info["num_community_large"] = num_community_large
    info["community_size_small"] = community_size_small
    info["community_size_large"] = community_size_large
    #print info

    # calculate prob
    def cal_q1():
        K = float(num_community)
        nl = float(community_size_small)
        nr = nl + 1
        n = float(p.num_nodes - p.num_top)
        nh = float(community_size_large)
        Kl = float(num_community_small)
        Kh = float(num_community_large)
        return Kl * nl / n * (nl - 1) / (n - 1) + Kh * nh / n * (nh - 1) / (n - 1)

    info["q"] = [cal_q1(), 1.0, float(num_community) if p.q2 is None else p.q2]
    tmp = p.num_nodes - p.num_top
    info["Z"] = [0.0, info["q"][0] * tmp * tmp + \
        2.0 * tmp * p.num_top + info["q"][2] * p.num_top * p.num_top]
    info["num_topk_edges"] = 0
    for e in G.edges():
        if e[0] in top_set and e[1] in top_set:
            info["Z"][0] += info["q"][2]
            info["num_topk_edges"] += 1
        elif e[0] in top_set or e[1] in top_set:
            info["Z"][0] += 1
        else:
            info["Z"][0] += info["q"][0]
    info["total_degree"] = G.graph["degree"]
    info["num_community"] = num_community
    res["data_path"] = p.res_path
    print "Info: ", info["q"], info["Z"]
    return res
# NOTE needs cleaning
import time
start = time.time()
from Queue import PriorityQueue as pq

items = pq()
items.put((0, -1, {1}))
SIZE = 200
vals = [3000] * (SIZE + 1)
vals[0] = 0
while 3000 in vals:
    item = items.get()
    priority = item[0]
    new_priority = priority + 1
    # the value is stored negated because the priority queue
    # implementation takes the smallest element first
    old_val = -1 * item[1]
    elem = item[2]
    if old_val > SIZE:
        continue
    if vals[old_val] > priority:
        vals[old_val] = priority
    for i in elem:
        new_val = i + old_val
        if new_val > SIZE or vals[new_val] != 3000:
            continue
        new_set = set(list(elem))
        new_set.add(new_val)
        items.put((new_priority, -1 * new_val, new_set))
print sum(vals)
class ProcessorManager(object):
    maxCoresAvailable = 0
    processorFreeQueue = pq()
    processorBusyQueue = pq()
    taskManager = task_manager

    def __init__(self, processorFreeQueue, processorBusyQueue, taskManager):
        self.processorFreeQueue = processorFreeQueue
        self.processorBusyQueue = processorBusyQueue
        self.taskManager = taskManager

    # Method to get a processor for a task that is ready to be executed
    def getBestAvailableFreeProcessor(self, task):
        count = 0
        removed = []
        freeProcessors = self.processorFreeQueue._qsize(len)
        while count < freeProcessors:
            processor = self.processorFreeQueue.get()
            if processor.getCore() >= task.getCore():
                for busyprocessors in removed:
                    self.processorFreeQueue.put(busyprocessors)
                return processor
            # If requirements for the task exceed available resources,
            # we discard the task and tasks dependent on it
            elif task.getCore() > self.maxCoresAvailable:
                removed.append(processor)
                self.taskManager.markTaskAsDiscarded(task)
                self.taskManager.updatePostTasks(task)
                self.updateProcessorInformation(processor)
                return -1
            else:
                removed.append(processor)
            count += 1
        self.maintainFreeQueue(removed)
        return None

    # Allot task to processor and start execution
    def allotTaskToProcessor(self, task, processor):
        self.processorBusyQueue.put(processor)
        processor.setTask(task)
        processor.setRemainingTicks(task.getTicks())
        print "Task : " + task.getName() + " , Processor : " + processor.getName()

    # Simulating ticks for a process
    def decrementTicks(self):
        count = 0
        removed = []
        busyProcessors = self.processorBusyQueue._qsize(len)
        while count < busyProcessors:
            processor = self.processorBusyQueue.get_nowait()
            processor.decrementTick(processor.getCore())
            removed.append(processor)
            count += 1
        self.maintainBusyQueue(removed)

    # Methods to handle freeing of a processor and marking its task as
    # complete when execution has finished (remaining ticks are 0)
    def markTaskAsCompleted(self, task):
        self.taskManager.markTaskAsCompleteAndUpdateDependencies(task)

    def checkForCompletedTaskAndUpdate(self):
        count = 0
        removed = []
        busyProcessors = self.processorBusyQueue._qsize(len)
        while count < busyProcessors:
            processor = self.processorBusyQueue.get()
            if processor.getRemainingTicks() == 0:
                self.markTaskAsCompleted(processor.getTask())
                self.updateProcessorInformation(processor)
            else:
                removed.append(processor)
            count += 1
        self.maintainBusyQueue(removed)

    def updateProcessorInformation(self, processor):
        processor.setTask(None)
        self.addProcessorToProcessorFreeQueue(processor)

    def addProcessorToProcessorFreeQueue(self, processor):
        self.processorFreeQueue.put(processor)

    def removeProcessorFromProcessorBusyQueue(self, processor):
        processor.processorBusyQueue.get()

    def runningTasks(self):
        if self.processorBusyQueue._qsize(len) > 0:
            return True
        else:
            return False

    def maintainFreeQueue(self, removedProcessors):
        for busyprocessors in removedProcessors:
            self.processorFreeQueue.put(busyprocessors)

    def maintainBusyQueue(self, removedProcessors):
        for busyprocessors in removedProcessors:
            self.processorBusyQueue.put(busyprocessors)
class TaskScheduler(object):
    taskQueue = pq()
    processorQueue = processor
    processorFreeQueue = pq()
    processorBusyQueue = pq()
    taskManager = task_manager.TaskManager(taskQueue)
    processorManager = processor_manager.ProcessorManager(
        processorFreeQueue, processorBusyQueue, taskManager)

    # Read yaml file and create list of available computing resources
    def getProcessorList(self, filePath):
        try:
            processorsFile = open(filePath)
        except IOError:
            print "File not found. Please specify the correct path!"
            exit(1)
        else:
            processorsData = yaml.load(processorsFile)
            if processorsData is not None:
                for processor, cores in processorsData.iteritems():
                    if processor_manager.ProcessorManager.maxCoresAvailable < cores:
                        processor_manager.ProcessorManager.maxCoresAvailable = cores
                    self.processorFreeQueue.put(
                        self.processorQueue.Processor(processor, cores),
                        block=True, timeout=None)
            else:
                print "Input file is empty. Please include computing resources present!"
                exit(1)
            processorsFile.close()

    # Read yaml file that contains the task list
    def getTaskList(self, filePath):
        taskName = ""
        cores, ticks = 0, 0
        status = "Y"
        taskObjects = {}
        taskMap = {}
        try:
            taskFile = open(filePath)
        except IOError:
            print "File not found. Please specify the correct path!"
            exit(1)
        else:
            fileData = yaml.load(taskFile)
            # Updating task queue with details of each task such as
            # cores needed, execution time and parent tasks
            if fileData is not None:
                for task, details in fileData.iteritems():
                    status = "Y"
                    taskName = task.lower()
                    if 'cores_required' in details and bool(details['cores_required']) \
                            and int(details['cores_required']) > 0:
                        cores = int(details['cores_required'])
                    else:
                        cores = 1
                    if 'execution_time' in details and bool(details['execution_time']) \
                            and int(details['execution_time']) > 0:
                        ticks = int(details['execution_time'])
                    else:
                        ticks = 100
                    if 'parent_tasks' in details and bool(details['parent_tasks']):
                        status = "N"
                        taskMap[taskName] = details['parent_tasks'].lower()
                    task = self.getTask(taskName, status, cores, ticks)
                    self.taskQueue.put(task, block=True, timeout=False)
                    taskObjects[taskName] = task
                    if status == 'N':
                        self.addDependentTasks(taskMap, taskObjects, task,
                                               details['parent_tasks'].lower())
                for task in taskObjects.itervalues():
                    if len(task.PreReq) > 0:
                        for i, taskString in enumerate(task.PreReq):
                            task.getPreReq().append(taskObjects[taskString])
                            taskObjects[taskString].getPostReq().append(task)
                for parenttask in taskObjects.itervalues():
                    if len(parenttask.getPreReq()) > 0 and len(parenttask.getPostReq()) > 0:
                        for task in parenttask.getPreReq():
                            if parenttask in task.getPreReq():
                                task.setStatus('D')
                                parenttask.setStatus('D')
            else:
                print "Input file is empty. Please include tasks to be processed!"
                exit(1)
            taskFile.close()

    # If tasks depend on some other tasks, we create a list of dependencies
    # called Post-requirements and Pre-requirements
    def addDependentTasks(self, taskMap, taskObjects, task, dependentTasks):
        if dependentTasks is not None:
            for taskName in dependentTasks.split(","):
                task.PreReq.append(taskName.strip())

    def getTask(self, taskName, dependent, core, ticks):
        return task.Task(taskName, dependent, core, ticks)

    def __str__(self):
        return ""

    # This function executes until the task queue and processor busy queue are both empty
    def executeTasks(self):
        while True:
            while True:
                if not self.processorFreeQueue.empty():
                    task = self.taskManager.getNextTask()
                    if task is None:
                        break
                    if task.status == 'S':
                        self.taskManager.updatePostTasks(task)
                        break
                    if task.status == 'D':
                        self.taskManager.markTaskAsDeadlocked(task)
                        break
                    p = self.processorManager.getBestAvailableFreeProcessor(task)
                    if p is None:
                        self.taskManager.addTaskToQueue(task)
                        break
                    elif p == -1:
                        break
                    else:
                        self.processorManager.allotTaskToProcessor(task, p)
                else:
                    self.processorManager.decrementTicks()
                    self.processorManager.checkForCompletedTaskAndUpdate()
            self.processorManager.decrementTicks()
            self.processorManager.checkForCompletedTaskAndUpdate()
            if (not self.processorManager.runningTasks()) and self.taskQueue.empty():
                break
def __init__(self):
    self.event_queue = pq()
MOD = 500500507


def pfactor_gen(size):
    # for each number n, return some factor of it.
    primes = [2]
    stuff = [0, 1, 2] + [1, 2] * (size / 2 - 1)
    for i in xrange(3, size, 2):
        if stuff[i] == 1:
            primes.append(i)
            for j in xrange(i**2, size, i * 2):
                stuff[j] = i
        if len(primes) > 500500:
            break
    return primes


primes = pfactor_gen(SIZE)
divisors = pq()
solution = 1
for prime in primes:
    divisors.put(prime)
for i in range(NUM_DIV):
    num = divisors.get()
    solution = (num * solution) % MOD
    if num < SIZE:
        divisors.put(num**2)
print solution
print "Time Taken:", time.time() - START
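The greedy step above (pop the smallest pending factor, multiply it in, push its square) builds the smallest number whose divisor count is a power of two: each multiplication by a fresh prime or by the square of a factor already used doubles the divisor count. A small Python 3 sketch with `heapq`, without the modular reduction, so the invariant can be checked directly (`smallest_with_2k_divisors` and its prime list are illustrative, not from the original):

```python
import heapq

def smallest_with_2k_divisors(k, primes=(2, 3, 5, 7, 11, 13)):
    # Greedy from the solution above: the heap holds candidate
    # factors (unused primes and squares of factors already taken);
    # each pick multiplies the result by the cheapest factor that
    # doubles the divisor count. `primes` must be large enough for k.
    heap = list(primes)
    heapq.heapify(heap)
    n = 1
    for _ in range(k):
        f = heapq.heappop(heap)
        n *= f
        heapq.heappush(heap, f * f)
    return n
```

For example, four picks take 2, 3, 4, 5 and return 120 = 2^3 · 3 · 5, which has (3+1)(1+1)(1+1) = 16 = 2^4 divisors; the full solution does the same with 500500 picks modulo 500500507.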