def subhplot(hplot):  # create histogram plots
    hh = hplot[0]
    xmin = hplot[1][0]
    xmax = hplot[1][1]
    xtipos = hplot[1][2]
    ymin = hplot[2][0]
    ymaxf = hplot[2][1]
    fmx = hplot[2][2]
    ytiposf = hplot[2][3]
    hname = hplot[3]
    sbp = hplot[4]
    nsub = numpy.mod(sbp, 10)
    pylab.plot(hh[1], hh[0], linestyle='None', linewidth=0.1)
    yyl = numpy.log10(max(hh[0]))
    ylf = numpy.fix(yyl)
    ymax = (numpy.round(10**(yyl - ylf))) * (10**ylf)
    ystep = numpy.round(ymax - ymin) / 4
    ytipos = numpy.arange(ystep, 4 * ystep, ystep)
    if fmx == True:
        ymax = ymaxf
        ytipos = ytiposf
    pylab.text(xmin + 0.1 * (xmax - xmin), 0.7 * ymax, hname)
    pylab.axis([xmin, xmax, ymin, ymax], linewidth=0.1)
    #if numpy.mod(nsub,2)==1 :
    if numpy.mod(nsub, 2) < 2:
        pylab.yticks(ytipos, weight='light', fontsize=9)
    if nsub >= 5:
        pylab.xticks(xtipos, weight='light', fontsize=9)
    xtiblank = (('',) * len(xtipos))
    ytiblank = (('',) * len(ytipos))
    #if numpy.mod(nsub,2)==0 :
    #    pylab.yticks(ytipos,ytiblank)
    if nsub < 5:
        pylab.xticks(xtipos, xtiblank)
    return
def plot_embedding(X, title=None):
    x_min, x_max = np.min(X, 0), np.max(X, 0)
    X = (X - x_min) / (x_max - x_min)

    pl.figure()
    ax = pl.subplot(111)
    for i in range(digits.data.shape[0]):
        pl.text(X[i, 0], X[i, 1], str(digits.target[i]),
                color=pl.cm.Set1(digits.target[i] / 10.0),
                fontdict={"weight": "bold", "size": 9})

    if hasattr(offsetbox, "AnnotationBbox"):
        # only print thumbnails with matplotlib > 1.0
        shown_images = np.array([[1.0, 1.0]])  # just something big
        for i in range(digits.data.shape[0]):
            dist = np.sum((X[i] - shown_images) ** 2, 1)
            if np.min(dist) < 4e-3:
                # don't show points that are too close
                continue
            shown_images = np.r_[shown_images, [X[i]]]
            imagebox = offsetbox.AnnotationBbox(
                offsetbox.OffsetImage(digits.images[i], cmap=pl.cm.gray_r),
                X[i])
            ax.add_artist(imagebox)
    pl.xticks([]), pl.yticks([])
    if title is not None:
        pl.title(title)
def plotAllWarmJumps():
    jumpAddrs = np.array(getAllWarmJumpsAddr()).reshape((8, 18))
    figure()
    pcolor(jumpAddrs)
    for (x, y), v in np.ndenumerate(jumpAddrs):
        text(y + 0.125, x + 0.5, "0x%03x" % v)
    show()
def plot_cfl_results(results):
    cfl = np.array([r.CFL_NUMBER for r in results.ANALYSES.values()])
    its = np.array([r.ITERATIONS for r in results.ANALYSES.values()])
    evl = np.arange(len(cfl))

    i = np.argsort(cfl)
    cfl = cfl[i]
    its = its[i]
    evl = evl[i]

    its[its > 1000] = np.nan

    plt.figure(1)
    plt.clf()
    plt.plot(cfl, its, "bo-", lw=2, ms=10)
    plt.xlabel("CFL Number")
    plt.ylabel("Iterations")
    for e, x, y in zip(evl, cfl, its):
        plt.text(x, y, "%i" % (e + 1))
    plt.savefig("cfl_history.eps")
    plt.close()
def cmap_plot(cmdLine):
    pylab.figure(figsize=[5, 10])
    a = outer(ones(10), arange(0, 1, 0.01))
    subplots_adjust(top=0.99, bottom=0.00, left=0.01, right=0.8)
    maps = [m for m in cm.datad if not m.endswith("_r")]
    maps.sort()
    l = len(maps) + 1
    for i, m in enumerate(maps):
        print m
        subplot(l, 1, i + 1)
        pylab.setp(pylab.gca(), xticklabels=[], xticks=[], yticklabels=[], yticks=[])
        imshow(a, aspect='auto', cmap=get_cmap(m), origin="lower")
        pylab.text(100.85, 0.5, m, fontsize=10)

    # render plot
    if cmdLine:
        pylab.show(block=True)
    else:
        pylab.ion()
        pylab.plot([])
        pylab.ioff()

    status = 1
    return status
def plot_density(self, plot_filename="out/density.png"):
    x, y, labels = self.load_data()
    figure(figsize=(self.fig_width, self.fig_height), dpi=80)

    # Perform a kernel density estimator on the coords in data.
    # The following 10 lines can be commented out if density map not needed.
    space_factor = 1.2
    xmin = space_factor * x.min()
    xmax = space_factor * x.max()
    ymin = space_factor * y.min()
    ymax = space_factor * y.max()
    X, Y = mgrid[xmin:xmax:100j, ymin:ymax:100j]
    positions = c_[X.ravel(), Y.ravel()]
    values = c_[x, y]
    kernel = stats.kde.gaussian_kde(values.T)
    Z = reshape(kernel(positions.T).T, X.T.shape)
    imshow(rot90(Z), cmap=cm.gist_earth_r, extent=[xmin, xmax, ymin, ymax])

    # Plot the labels
    num_labels_to_plot = min([len(labels), self.max_labels, len(x), len(y)])
    if self.has_labels:
        for i in range(num_labels_to_plot):
            text(x[i], y[i], labels[i])  # assumes m size and order matches labels
    else:
        plot(x, y, "k.", markersize=1)
    axis("equal")
    axis("off")
    savefig(plot_filename)
    print "wrote %s" % (plot_filename)
def plotB3reg():
    w = loadStanFit('revE2B3BHreg.fit')
    printCI(w, 'mmu')
    printCI(w, 'mr')
    for b in range(2):
        subplot(1, 2, b + 1)
        plt.title('')
        px = np.array(np.linspace(-0.5, 0.5, 101), ndmin=2)
        a0 = np.array(w['mmu'][:, b], ndmin=2).T
        a1 = np.array(w['mr'][:, b], ndmin=2).T
        y = np.concatenate([sap(a0 + a1 * px, 97.5, axis=0),
                            sap(a0 + a1 * px[:, ::-1], 2.5, axis=0)])
        x = np.squeeze(np.concatenate([px, px[:, ::-1]], axis=1))
        plt.plot(px[0, :], np.median(a0) + np.median(a1) * px[0, :], 'red')
        #plt.plot([-1,1],[0.5,0.5],'grey')
        ax = plt.gca()
        ax.set_aspect(1)
        ax.add_patch(plt.Polygon(np.array([x, y]).T, alpha=0.2, fill=True, fc='red', ec='w'))
        y = np.concatenate([sap(a0 + a1 * px, 75, axis=0),
                            sap(a0 + a1 * px[:, ::-1], 25, axis=0)])
        ax.add_patch(plt.Polygon(np.array([x, y]).T, alpha=0.2, fill=True, fc='red', ec='w'))
        man = np.array([-0.4, -0.2, 0, 0.2, 0.4])
        mus = []
        for m in range(len(man)):
            mus.append(loadStanFit('revE2B3BH%d.fit' % m)['mmu'][:, b])
        mus = np.array(mus).T
        errorbar(mus, x=man)
        ax.set_xticks(man)
        plt.xlim([-0.5, 0.5])
        plt.ylim([-0.4, 0.8])
        #plt.xlabel('Manipulated Displacement')
        if b == 0:
            plt.ylabel('Perceived Displacement')
        plt.gca().set_yticklabels([])
        subplot_annotate()
    plt.text(-1.1, -0.6, 'Pivot Displacement', fontsize=8)
def plot_one_gene(self, gene, y_value=1, buffer=0.4):  # ax: pylab.axis obj.
    """
    2008-09-29: Code largely borrowed from Yu Huang..
    """
    y_value = buffer + y_value / 2.0
    pylab.text(gene.startPos, -y_value + 0.08, gene.tairID, size=8)
    pylab.plot([gene.startPos, gene.endPos], [-y_value, -y_value], "k", linewidth=2)
def _draw_V(self):
    """ draw the V-cycle on our optional visualization """

    xdown = numpy.linspace(0.0, 0.5, self.nlevels)
    xup = numpy.linspace(0.5, 1.0, self.nlevels)

    ydown = numpy.linspace(1.0, 0.0, self.nlevels)
    yup = numpy.linspace(0.0, 1.0, self.nlevels)

    pylab.plot(xdown, ydown, lw=2, color="k")
    pylab.plot(xup, yup, lw=2, color="k")
    pylab.scatter(xdown, ydown, marker="o", color="k", s=40)
    pylab.scatter(xup, yup, marker="o", color="k", s=40)

    if self.up_or_down == "down":
        pylab.scatter(xdown[self.nlevels - self.current_level - 1],
                      ydown[self.nlevels - self.current_level - 1],
                      marker="o", color="r", zorder=100, s=38)
    else:
        pylab.scatter(xup[self.current_level], yup[self.current_level],
                      marker="o", color="r", zorder=100, s=38)

    pylab.text(0.7, 0.1, "V-cycle %d" % (self.current_cycle))
    pylab.axis("off")
def add_bar(self, fname, cname, model):
    chiT, dof = 0, 0
    fname = cdir + fname
    pnames = open(fname + '.paramnames').readlines()

    if 'Neff' in fname:
        loglsx = open(fname + '.maxlike').readlines()
        logls2 = loglsx
    else:
        for i in range(3):
            loglsx = open(fname + '_' + str(i + 1) + '.maxlike').readlines()
            if (i == 0):
                logls2 = loglsx
            else:
                if float(loglsx[0].split()[1]) < float(logls2[0].split()[1]):
                    logls2 = loglsx
    logls = logls2[0].split(' ')[2:]

    chi2d = {}
    print(' ')
    print('++++' + model)
    for pname, logl in zip(pnames, logls):
        if "_like" in pname:
            ppname = pname.split(' ')
            if 'SPlanck' in ppname[0]:
                chi2 = 0
            else:
                if 'Neff' in fname:
                    chi2 = float(logl)
                    if 'Betoule' in ppname[0]:
                        chi2 = chi2 - 692
                else:
                    chi2 = -2 * float(logl)
                    if 'Betoule' in ppname[0]:
                        chi2 = chi2 - 30
            chiT += chi2
            xname = ppname[0].replace('_like', '')
            print(xname, chi2)
            chi2d[xname] = chi2
    print('Min_chi2 = ', chiT + 30)

    param = mdof[model]
    for xname in nlist:
        dof += defdof[xname]

    left = 0
    for xname in nlist:
        chi2 = chi2d[xname]
        color = colors[xname]
        PP = pylab.barh(self.cy - 0.25, chi2, left=left, height=0.5,
                        color=color, linewidth=0)
        self.patches[xname] = PP[0]
        left += chi2

    if "SN" in dataset:
        pylab.text(position, self.cy - 0.25,
                   r' %.2f / %s' % (chiT + 30, dof - param + 30), fontsize=15)
    else:
        pylab.text(position, self.cy - 0.25,
                   r' %.2f / %s' % (chiT, dof - param), fontsize=15)

    self.ys.append(self.cy)
    self.cy += 1
    self.names.append(cname)
def plot_prob_effector(sens, fpr, xmax=1, baserate=0.1):
    """Plots a line graph of P(effector|positive test) against the baserate
    of effectors in the input set to the classifier.

    The baserate argument draws an annotation arrow indicating P(pos|+ve)
    at that baserate.
    """
    assert 0.1 <= xmax <= 1, "Max x axis value must be in range [0,1]"
    assert 0.01 <= baserate <= 1, "Baserate annotation must be in range [0,1]"
    baserates = pylab.arange(0, 1.05, xmax * 0.005)
    probs = [p_correct_given_pos(sens, fpr, b) for b in baserates]
    pylab.plot(baserates, probs, 'r')
    pylab.title("P(eff|pos) vs baserate; sens: %.2f, fpr: %.2f" % (sens, fpr))
    pylab.ylabel("P(effector|positive)")
    pylab.xlabel("effector baserate")
    pylab.xlim(0, xmax)
    pylab.ylim(0, 1)
    # Add annotation arrow
    xpos, ypos = (baserate, p_correct_given_pos(sens, fpr, baserate))
    if baserate < xmax:
        if xpos > 0.7 * xmax:
            xtextpos = 0.05 * xmax
        else:
            xtextpos = xpos + (xmax - xpos) / 5.
        if ypos > 0.5:
            ytextpos = ypos - 0.05
        else:
            ytextpos = ypos + 0.05
        pylab.annotate('baserate: %.2f, P(pos|+ve): %.3f' % (xpos, ypos),
                       xy=(xpos, ypos),
                       xytext=(xtextpos, ytextpos),
                       arrowprops=dict(facecolor='black', shrink=0.05))
    else:
        pylab.text(0.05 * xmax, 0.95,
                   'baserate: %.2f, P(pos|+ve): %.3f' % (xpos, ypos))
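# The helper p_correct_given_pos() is not shown above. A minimal sketch,
# assuming it applies Bayes' rule P(eff|+) = sens*b / (sens*b + fpr*(1 - b)),
# is given here for illustration only; the real helper may differ.
def p_correct_given_pos(sens, fpr, baserate):
    """Hypothetical helper: posterior P(effector | positive test)."""
    return (sens * baserate) / (sens * baserate + fpr * (1.0 - baserate))

# Assumed usage: a classifier with 95% sensitivity and a 5% false-positive
# rate, annotated at a 10% effector baserate.
# plot_prob_effector(0.95, 0.05, xmax=1, baserate=0.1)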
def plot_different_versions(repo_name, pre_fetch_size, distance_to_fetch, fig_name=None):
    """Sample plot of several different repos."""
    x = range(100)
    legend = []
    fig, ax = plt.subplots()
    for version in version_color:
        csv_reader = _file_to_csv(
            version=version,
            repo_name=repo_name,
            pre_fetch_size=pre_fetch_size,
            distance_to_fetch=distance_to_fetch)
        y = get_column(csv_reader, 'hit_rate')
        if y is not None:
            plt.plot(x, y, color=version_color[version])
            line = mlines.Line2D(
                [], [],
                label=version,
                color=version_color[version],
                linewidth=2)
            legend.append(line)
    plt.title('Different version outputs of %s.git' % (repo_name,), y=1.02)
    plt.legend(handles=legend, loc=4)
    plt.ylabel('hit rate')
    plt.xlabel('cache size (%)')
    legend_text = 'pfs = %s, dtf = %s\ncommit_num = %s' % (
        pre_fetch_size, distance_to_fetch, REPO_DATA[repo_name]['commit_num'])
    text(0.55, 0.07, legend_text,
         ha='center', va='center', transform=ax.transAxes,
         multialignment='left',
         bbox=dict(alpha=1.0, boxstyle='square', facecolor='white'))
    plt.grid(True)
    plt.autoscale(False)
    plt.show()
def plot_frontier(self, frontier_only=False, plot_samples=True):
    """ Plot the frontier"""
    frontier = self.frontier
    frontier_energy = self.frontier_energy
    feat1, feat2 = self.feats

    pl.figure()
    if not frontier_only:
        ll_list1, ll_list2 = zip(*self.all_seq_energy)
        pl.plot(ll_list1, ll_list2, 'b*')
    if plot_samples:
        ll_list1, ll_list2 = zip(*self.sample_seq_energy)
        pl.plot(ll_list1, ll_list2, 'g*')

    pl.plot(*zip(*sorted(frontier_energy)), color='magenta',
            marker='*', linestyle='dashed')
    ctr = dict(zip(set(frontier_energy), [0] * len(set(frontier_energy))))
    for i, e in enumerate(frontier_energy):
        ctr[e] += 1
        pl.text(e[0], e[1] + 0.1 * ctr[e], str(i), fontsize=10)
        pl.text(e[0] + 0.4, e[1] + 0.1 * ctr[e], frontier[i], fontsize=9)
    pl.xlabel('Energy:' + feat1)
    pl.ylabel('Energy:' + feat2)
    pl.title('Energy Plot')
    xmin, xmax = pl.xlim()
    ymin, ymax = pl.ylim()
    pl.xlim(xmin, xmax)
    pl.ylim(ymin, ymax)
    pic_dir = '../docs/tex/pics/'
    pl.savefig(pic_dir + self.name + '.pdf')
    pl.savefig(pic_dir + self.name + '.png')
def plotDirections(aabb=(), mask=0, bins=20, numHist=True, noShow=False, sphSph=False):
    """Plot 3 histograms for distribution of interaction directions, in yz, xz and xy planes and
    (optional but default) histogram of number of interactions per body.
    If sphSph only sphere-sphere interactions are considered for the 3 directions histograms.

    :returns: If *noShow* is ``False``, displays the figure and returns nothing. If *noShow*, the figure object is returned without being displayed (works the same way as :yref:`yade.plot.plot`).
    """
    import pylab, math
    from yade import utils
    for axis in [0, 1, 2]:
        d = utils.interactionAnglesHistogram(axis, mask=mask, bins=bins, aabb=aabb, sphSph=sphSph)
        fc = [0, 0, 0]
        fc[axis] = 1.
        subp = pylab.subplot(220 + axis + 1, polar=True)
        # 1.1 makes small gaps between values (but the column is a bit decentered)
        pylab.bar(d[0], d[1], width=math.pi / (1.1 * bins), fc=fc, alpha=.7, label=['yz', 'xz', 'xy'][axis])
        #pylab.title(['yz','xz','xy'][axis]+' plane')
        pylab.text(.5, .25, ['yz', 'xz', 'xy'][axis], horizontalalignment='center',
                   verticalalignment='center', transform=subp.transAxes, fontsize='xx-large')
    if numHist:
        pylab.subplot(224, polar=False)
        nums, counts = utils.bodyNumInteractionsHistogram(aabb if len(aabb) > 0 else utils.aabbExtrema())
        avg = sum([nums[i] * counts[i] for i in range(len(nums))]) / (1. * sum(counts))
        pylab.bar(nums, counts, fc=[1, 1, 0], alpha=.7, align='center')
        pylab.xlabel('Interactions per body (avg. %g)' % avg)
        pylab.axvline(x=avg, linewidth=3, color='r')
        pylab.ylabel('Body count')
    if noShow:
        return pylab.gcf()
    else:
        pylab.ion()
        pylab.show()
def plot_genome(out_file, data_file, samples, dpi=300, screen=False):
    if screen:
        PL.rcParams.update(PLOT_PARAMS_SCREEN)
    LOG.info("plot_genome - out_file=%s, data_file=%s, samples=%s, dpi=%d"
             % (out_file, data_file, str(samples), dpi))
    colors = 'bgryckbgryck'
    data = read_posterior(data_file)
    if samples is None or len(samples) == 0:
        samples = data.keys()
    if len(samples) == 0:
        return

    PL.figure(None, [14, 4])
    right_end = 0  # rightmost plotted base pair
    for chrm in sort_chrms(data.values()[0]):  # for chromosomes in ascending order
        max_site = max(data[samples[0]][chrm]['L'])  # length of chromosome
        for s, sample in enumerate(samples):  # plot all samples
            I = SP.where(SP.array(data[sample][chrm]['SD']) < 0.3)[0]  # at sites that have confident posteriors
            PL.plot(SP.array(data[sample][chrm]['L'])[I] + right_end,
                    SP.array(data[sample][chrm]['AF'])[I],
                    alpha=0.4, color=colors[s], lw=2)  # offset by the end of last chromosome
        if right_end > 0:
            PL.plot([right_end, right_end], [0, 1], 'k--', lw=0.4, alpha=0.2)  # plot separators between chromosomes
        new_right = right_end + max(data[sample][chrm]['L'])
        PL.text(right_end + 0.5 * (new_right - right_end), 0.9, str(chrm), horizontalalignment='center')
        right_end = new_right  # update rightmost end
    PL.plot([0, right_end], [0.5, 0.5], 'k--', alpha=0.3)
    PL.xlim(0, right_end)
    xrange = SP.arange(0, right_end, 1000000)
    PL.xticks(xrange, ["%d" % (int(x / 1000000)) for x in xrange])
    PL.xlabel("Genome (Mb)"), PL.ylabel("Reference allele frequency")
    PL.savefig(out_file, dpi=dpi)
def labelPlot(numFlips, numTrials, mean, sd):
    pylab.title(str(numTrials) + ' trials of ' + str(numFlips) + ' flips each')
    pylab.xlabel('Fraction of Heads')
    pylab.ylabel('Number of Trials')
    xmin, xmax = pylab.xlim()
    ymin, ymax = pylab.ylim()
    pylab.text(xmin + (xmax - xmin) * 0.02, (ymax - ymin) / 2,
               'Mean = ' + str(round(mean, 4)) +
               '\nSD = ' + str(round(sd, 4)))
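# Usage sketch (not from the original source): labelPlot() assumes a histogram
# of head-fractions is already on the current axes; it only adds the title,
# axis labels, and the mean/SD annotation.
import random
import pylab

numFlips, numTrials = 100, 1000
fracHeads = [sum(random.choice((0, 1)) for _ in range(numFlips)) / float(numFlips)
             for _ in range(numTrials)]
pylab.hist(fracHeads, bins=20)
mean = sum(fracHeads) / len(fracHeads)
sd = (sum((f - mean) ** 2 for f in fracHeads) / len(fracHeads)) ** 0.5
labelPlot(numFlips, numTrials, mean, sd)
pylab.show()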
def bondlengths(Ea, dE):
    """Calculate bond lengths and write to bondlengths.csv file"""
    B = []
    E0 = []
    csv = open('bondlengths.csv', 'w')
    for formula, energies in dE:
        bref = diatomic[formula][1]
        b = np.linspace(0.96 * bref, 1.04 * bref, 5)
        e = np.polyfit(b, energies, 3)
        if not formula in Ea:
            continue
        ea, eavasp = Ea[formula]
        dedb = np.polyder(e, 1)
        b0 = np.roots(dedb)[1]
        assert abs(b0 - bref) < 0.1
        b = np.linspace(0.96 * bref, 1.04 * bref, 20)
        e = np.polyval(e, b) - ea
        if formula == 'O2':
            plt.plot(b, e, '-', color='0.7', label='GPAW')
        else:
            plt.plot(b, e, '-', color='0.7', label='_nolegend_')
        name = latex(data[formula]['name'])
        plt.text(b[0], e[0] + 0.2, name)
        B.append(bref)
        E0.append(-eavasp)
        csv.write('`%s`, %.3f, %.3f, %+.3f\n' % (name[1:-1], b0, bref, b0 - bref))

    plt.plot(B, E0, 'g.', label='reference')
    plt.legend(loc='lower right')
    plt.xlabel('Bond length $\mathrm{\AA}$')
    plt.ylabel('Energy [eV]')
    plt.savefig('bondlengths.png')
def bar_plot_1(data):
    """Generates bar plot from data"""
    x_labels = [a for (a, b) in data]
    y_data = [b for (a, b) in data]

    # Create chart
    pos = range(1, len(x_labels) + 1)
    P.figure(1, figsize=(11, 7))
    P.bar(left=pos, height=y_data, log=True, width=.6, color="lightgrey", edgecolor="#8094B6")
    pos2 = [a + .3 for a in pos]
    P.xticks(pos2, x_labels)
    P.title("Evolution of network data size over time", fontsize="x-large")
    P.xlabel("Network data sets (year published)", fontsize="large")
    P.ylabel("Number of vertices [log(N)]", fontsize="large")
    text_color = "black"
    for i in range(len(y_data)):
        if i < 2:
            P.text(pos[i] + 0.01, y_data[i] + 5, int_to_scinot(y_data[i]), color=text_color)
        elif i == 2:
            P.text(pos[i] + 0.01, y_data[i] + 100, int_to_scinot(y_data[i]), color=text_color)
        elif i == 3:
            P.text(pos[i] + 0.01, y_data[i] + 1000, int_to_scinot(y_data[i]), color=text_color)
        elif i == 4:
            P.text(pos[i] + 0.01, y_data[i] + 100000, int_to_scinot(y_data[i]), color=text_color)
        else:
            P.text(pos[i] + 0.01, y_data[i] + 1000000, int_to_scinot(y_data[i]), color=text_color)
    P.savefig("../../images/figures/net_size_evo.png", dpi=100, format="png")
def gui_repr(self):
    """Generate a GUI to represent the sentence alignments
    """
    if __pylab_loaded__:
        fig_width = max(len(self.text_e), len(self.text_f)) + 1
        fig_height = 3
        pylab.figure(figsize=(fig_width * 0.8, fig_height * 0.8), facecolor='w')
        pylab.box(on=False)
        pylab.subplots_adjust(left=0, right=1, bottom=0, top=1)
        pylab.xlim(-1, fig_width - 1)
        pylab.ylim(0, fig_height)
        pylab.xticks([])
        pylab.yticks([])

        e = [0 for _ in xrange(len(self.text_e))]
        f = [0 for _ in xrange(len(self.text_f))]
        for (i, j) in self.align:
            e[i] = 1
            f[j] = 1
            # draw the middle line
            pylab.arrow(i, 2, j - i, -1, color='r')
        for i in xrange(len(e)):
            # draw e side line
            pylab.text(i, 2.5, self.text_e[i], ha='center', va='center', rotation=30)
            if e[i] == 1:
                pylab.arrow(i, 2.5, 0, -0.5, color='r', alpha=0.3, lw=2)
        for i in xrange(len(f)):
            # draw f side line
            pylab.text(i, 0.5, self.text_f[i], ha='center', va='center', rotation=30)
            if f[i] == 1:
                pylab.arrow(i, 0.5, 0, 0.5, color='r', alpha=0.3, lw=2)

        pylab.draw()
def generateNetwork(aINDEX, dDIFF, AM, name):
    R = 100
    X = []
    Y = []
    Z = []
    for i in range(len(aINDEX)):
        #r = 100*R + sum(AM[i])*R
        r = 1000 * R + 10000 * abs(dDIFF[aINDEX[i]]) * R
        t = 2 * math.pi * random.random()
        X.append(r * math.cos(t))
        Y.append(r * math.sin(t))
        Z.append(sum(AM[i]))

    fig = P.figure()
    P.axis('off')
    P.scatter(X, Y, Z, edgecolor='', c='lightblue', alpha=0.5)
    for i in range(len(aINDEX)):
        for j in range(i):
            if sum(AM[i]) > 200 and sum(AM[j]) > 200:
                P.plot([X[i], X[j]], [Y[i], Y[j]], 'k', lw=0.01)
    for i in range(len(aINDEX)):
        if sum(AM[i]) > 200:
            P.text(X[i], Y[i], aINDEX[i], fontsize=8)
    #P.show()
    fig.savefig('figures/' + name + '.png')
    return
def doSubplot(multiplier=1.0, layout=(-1, -1)):
    if time_sec < 16200:
        xs, ys = xs_1, ys_1
        domain_bounds = bounds_1sthalf
        grid = grid_1
    else:
        xs, ys = xs_2, ys_2
        domain_bounds = bounds_2ndhalf
        grid = grid_2

    try:
        mo = ARPSModelObsFile("%s/%s/KCYS%03dan%06d" % (base_path, exp, min_ens, time_sec))
    except AssertionError:
        mo = ARPSModelObsFile("%s/%s/KCYS%03dan%06d" % (base_path, exp, min_ens, time_sec),
                              mpi_config=(2, 12))
    except:
        print "Can't load reflectivity ..."
        mo = {'Z': np.zeros((1, 255, 255), dtype=np.float32)}

    pylab.contour(xs, ys, wind[exp]['w'][wdt][domain_bounds],
                  levels=np.arange(2, 102, 2), styles='-', colors='k')
    pylab.contour(xs, ys, wind[exp]['w'][wdt][domain_bounds],
                  levels=np.arange(-100, 0, 2), styles='--', colors='k')
    pylab.quiver(xs[thin], ys[thin],
                 wind[exp]['u'][wdt][domain_bounds][thin],
                 wind[exp]['v'][wdt][domain_bounds][thin])
    pylab.contourf(xs, ys, mo['Z'][0][domain_bounds],
                   levels=np.arange(10, 85, 5), cmap=NWSRef, zorder=-10)
    grid.drawPolitical(scale_len=10)

    row, col = layout
    if col == 1:
        pylab.text(-0.075, 0.5, exp_names[exp], transform=pylab.gca().transAxes,
                   rotation=90, ha='center', va='center', size=12 * multiplier)
def scatter_from_csv(self, filename, sand='sand', silt='silt', clay='clay',
                     diameter='', hue='', tags='', **kwargs):
    """Loads data from filename (expects csv format). Needs one header row
    with at least the columns {sand, silt, clay}. Can also plot two more
    variables for each point; specify the header value for columns to be
    plotted as diameter, hue. Can also add a text tag offset from each point;
    specify the header value for those tags.
    Note! text values (header entries, tag values) need to be quoted to be
    recognized as text.
    """
    fh = file(filename, 'rU')
    soilrec = csv2rec(fh)
    count = 0
    if (sand in soilrec.dtype.names):
        count = count + 1
    if (silt in soilrec.dtype.names):
        count = count + 1
    if (clay in soilrec.dtype.names):
        count = count + 1
    if (count < 3):
        print "ERROR: need columns for sand, silt and clay identified in ", filename
    locargs = {'s': None, 'c': None}
    for (col, key) in ((diameter, 's'), (hue, 'c')):
        col = col.lower()
        if (col != '') and (col in soilrec.dtype.names):
            locargs[key] = soilrec.field(col)
        else:
            print 'ERROR: did not find ', col, 'in ', filename
    for k in kwargs:
        locargs[k] = kwargs[k]
    values = zip(*[soilrec.field(sand), soilrec.field(clay), soilrec.field(silt)])
    print values
    (xs, ys) = self._toCart(values)
    p.scatter(xs, ys, label='_', **locargs)
    if (tags != ''):
        tags = tags.lower()
        for (x, y, tag) in zip(*[xs, ys, soilrec.field(tags)]):
            print x,
            print y,
            print tag
            p.text(x + 1, y + 1, tag, fontsize=12)
    fh.close()
def print_matplotlib(s):
    pylab.figure()
    pylab.text(0, 0, s)
    pylab.axis('off')
    pylab.figure()
    #pylab.show()
    return
def scatter_stats(db, s1, s2, f1=None, f2=None, **kwargs):
    if f1 == None:
        f1 = lambda x: x  # identity function
    if f2 == None:
        f2 = f1
    x = []
    xerr = []
    y = []
    yerr = []
    for k in db:
        x_k = [f1(x_ki) for x_ki in db[k].__getattribute__(s1).gettrace()]
        y_k = [f2(y_ki) for y_ki in db[k].__getattribute__(s2).gettrace()]
        x.append(pl.mean(x_k))
        xerr.append(pl.std(x_k))
        y.append(pl.mean(y_k))
        yerr.append(pl.std(y_k))
        pl.text(x[-1], y[-1], " %s" % k, fontsize=8, alpha=0.4, zorder=-1)
    default_args = {"fmt": "o", "ms": 10}
    default_args.update(kwargs)
    pl.errorbar(x, y, xerr=xerr, yerr=yerr, **default_args)
    pl.xlabel(s1)
    pl.ylabel(s2)
def compare_models(db, stoch="itn coverage", stat_func=None, plot_type="", **kwargs):
    if stat_func == None:
        stat_func = lambda x: x
    X = {}
    for k in sorted(db.keys()):
        c = k.split("_")[2]
        X[c] = []
    for k in sorted(db.keys()):
        c = k.split("_")[2]
        X[c].append([stat_func(x_ki) for x_ki in db[k].__getattribute__(stoch).gettrace()])

    x = pl.array([pl.mean(xc[0]) for xc in X.values()])
    xerr = pl.array([pl.std(xc[0]) for xc in X.values()])
    y = pl.array([pl.mean(xc[1]) for xc in X.values()])
    yerr = pl.array([pl.std(xc[1]) for xc in X.values()])

    if plot_type == "scatter":
        default_args = {"fmt": "o", "ms": 10}
        default_args.update(kwargs)
        for c in X.keys():
            pl.text(pl.mean(X[c][0]), pl.mean(X[c][1]), " %s" % c,
                    fontsize=8, alpha=0.4, zorder=-1)
        pl.errorbar(x, y, xerr=xerr, yerr=yerr, **default_args)
        pl.xlabel("First Model")
        pl.ylabel("Second Model")
        pl.plot([0, 1], [0, 1], alpha=0.5, linestyle="--", color="k", linewidth=2)
    elif plot_type == "rel_diff":
        d1 = sorted(100 * (x - y) / x)
        d2 = sorted(100 * (xerr - yerr) / xerr)
        pl.subplot(2, 1, 1)
        pl.title("Percent Model 2 deviates from Model 1")
        pl.plot(d1, "o")
        pl.xlabel("Countries sorted by deviation in mean")
        pl.ylabel("deviation in mean (%)")
        pl.subplot(2, 1, 2)
        pl.plot(d2, "o")
        pl.xlabel("Countries sorted by deviation in std err")
        pl.ylabel("deviation in std err (%)")
    elif plot_type == "abs_diff":
        d1 = sorted(x - y)
        d2 = sorted(xerr - yerr)
        pl.subplot(2, 1, 1)
        pl.title("Percent Model 2 deviates from Model 1")
        pl.plot(d1, "o")
        pl.xlabel("Countries sorted by deviation in mean")
        pl.ylabel("deviation in mean")
        pl.subplot(2, 1, 2)
        pl.plot(d2, "o")
        pl.xlabel("Countries sorted by deviation in std err")
        pl.ylabel("deviation in std err")
    else:
        assert 0, "plot_type must be abs_diff, rel_diff, or scatter"

    return pl.array([x, y, xerr, yerr])
def PASTISConfMap(confmatrix):
    norms = np.sum(confmatrix, axis=1)
    for i in range(len(confmatrix[:, 0])):
        confmatrix[i, :] /= norms[i]

    p.figure(12)
    p.clf()
    p.imshow(confmatrix, interpolation='nearest', origin='lower', cmap='YlOrRd')

    # box labels
    for x in range(len(confmatrix[:, 0])):
        for y in range(len(confmatrix[:, 0])):
            if confmatrix[y, x] > 0.05:
                if confmatrix[y, x] > 0.7:
                    p.text(x, y, str(np.round(confmatrix[y, x], decimals=3)),
                           va='center', ha='center', color='w')
                else:
                    p.text(x, y, str(np.round(confmatrix[y, x], decimals=3)),
                           va='center', ha='center')

    # plot grid lines (using p.grid leads to unwanted offset)
    for x in [0.5, 1.5, 2.5, 3.5, 4.5]:
        p.plot([x, x], [-0.5, 6.5], 'k--')
    for y in [0.5, 1.5, 2.5, 3.5, 4.5]:
        p.plot([-0.5, 6.5], [y, y], 'k--')
    p.xlim(-0.5, 5.5)
    p.ylim(-0.5, 5.5)
    p.xlabel('Predicted Class')
    p.ylabel('True Class')

    # class labels
    p.xticks([0, 1, 2, 3, 4, 5], ['Planet', 'EB', 'ET', 'PSB', 'BEB', 'BTP'], rotation='vertical')
    p.yticks([0, 1, 2, 3, 4, 5], ['Planet', 'EB', 'ET', 'PSB', 'BEB', 'BTP'])
def suplabel(axis, label, label_prop=None, labelpad=3, ha='center', va='center'):
    """ Add super ylabel or xlabel to the figure
    Similar to matplotlib.suptitle

    axis       - string: "x" or "y"
    label      - string
    label_prop - keyword dictionary for Text
    labelpad   - padding from the axis (default: 3)
    ha         - horizontal alignment (default: "center")
    va         - vertical alignment (default: "center")
    """
    fig = pylab.gcf()
    xmin = []
    ymin = []
    for ax in fig.axes:
        xmin.append(ax.get_position().xmin)
        ymin.append(ax.get_position().ymin)
    xmin, ymin = min(xmin), min(ymin)
    dpi = fig.dpi
    if axis.lower() == "y":
        rotation = 90.
        x = xmin - float(labelpad) / dpi
        y = 0.5
    elif axis.lower() == 'x':
        rotation = 0.
        x = 0.5
        y = ymin - float(labelpad) / dpi
    else:
        raise Exception("Unexpected axis: x or y")
    if label_prop is None:
        label_prop = dict()
    pylab.text(x, y, label, rotation=rotation,
               transform=fig.transFigure,
               ha=ha, va=va,
               **label_prop)
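# Usage sketch (not from the original source): place one shared x-label and
# one shared y-label on a 2x2 grid of subplots with the suplabel() helper
# defined above. Assumes pylab is the same module used inside suplabel().
import pylab

for i in range(4):
    pylab.subplot(2, 2, i + 1)
    pylab.plot(range(10))
suplabel('x', 'Time (s)')
suplabel('y', 'Amplitude', labelpad=8)
pylab.show()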
def _show_rates(rate, wo, wt, attenuator, tau_NP, tau_P):
    import pylab

    #pylab.figure()
    pylab.errorbar(rate, wt[0], yerr=wt[1], fmt='g.', label='attenuated')
    pylab.errorbar(rate, wo[0], yerr=wo[1], fmt='b.', label='unattenuated')

    pylab.xscale('log')
    pylab.yscale('log')
    pylab.xlabel('incident rate (counts/second)')
    pylab.ylabel('observed rate (counts/second)')
    pylab.legend(loc='best')
    pylab.grid(True)
    pylab.plot(rate, rate / attenuator, 'g-', label='target')
    pylab.plot(rate, rate, 'b-', label='target')

    Ipeak, Rpeak = peak_rate(tau_NP=tau_NP, tau_P=tau_P)
    if rate[0] <= Ipeak <= rate[-1]:
        pylab.axvline(x=Ipeak, ls='--', c='b')
        pylab.text(x=Ipeak, y=0.05, s=' %g' % Ipeak,
                   ha='left', va='bottom',
                   transform=pylab.gca().get_xaxis_transform())
    if False:
        pylab.axhline(y=Rpeak, ls='--', c='b')
        pylab.text(y=Rpeak, x=0.05, s=' %g\n' % Rpeak,
                   ha='left', va='bottom',
                   transform=pylab.gca().get_yaxis_transform())
def plotVowelProportionHistogram(wordList, numBins=15):
    """
    Plots a histogram of the proportion of vowels in each word in wordList
    using the specified number of bins in numBins
    """
    vowels = 'aeiou'
    vowelProportions = []

    for word in wordList:
        vowelsCount = 0.0
        for letter in word:
            if letter in vowels:
                vowelsCount += 1
        vowelProportions.append(vowelsCount / len(word))

    meanProportions = sum(vowelProportions) / len(vowelProportions)
    print "Mean proportions: ", meanProportions

    pylab.figure(1)
    pylab.hist(vowelProportions, bins=numBins)
    pylab.title("Histogram of Proportions of Vowels in Each Word")
    pylab.ylabel("Count of Words in Each Bucket")
    pylab.xlabel("Proportions of Vowels in Each Word")

    ymin, ymax = pylab.ylim()
    ymid = (ymax - ymin) / 2
    pylab.text(0.03, ymid, "Mean = {0}".format(str(round(meanProportions, 4))))
    pylab.vlines(0.5, 0, ymax)
    pylab.text(0.51, ymax - 0.01 * ymax, "0.5", verticalalignment='top')

    pylab.show()
def _plot(self, attribute_name, file_path=None):
    clf()  # Clear existing plot
    a = self._values[0] * 100
    b = self._values[1] * 100
    ax = subplot(111)
    plot(a, a, "k--", a, b, "r")
    ax.set_ylim([0, 100])
    ax.grid(color="0.5", linestyle=":", linewidth=0.5)
    xlabel("population")
    ylabel(attribute_name)
    title("Lorenz curve")
    font = {"fontname": "Courier", "color": "r", "fontweight": "bold", "fontsize": 11}
    box = {"pad": 6, "facecolor": "w", "linewidth": 1, "fill": True}
    text(5, 90, "Gini coefficient: %(gini)f" % {"gini": self._ginicoeff},
         font, color="k", bbox=box)
    majorLocator = MultipleLocator(20)
    majorFormatter = FormatStrFormatter("%d %%")
    minorLocator = MultipleLocator(5)
    ax.xaxis.set_major_locator(majorLocator)
    ax.xaxis.set_major_formatter(majorFormatter)
    ax.xaxis.set_minor_locator(minorLocator)
    ax.yaxis.set_major_locator(majorLocator)
    ax.yaxis.set_major_formatter(majorFormatter)
    ax.yaxis.set_minor_locator(minorLocator)
    if file_path:
        savefig(file_path)
        close()
    else:
        show()
def plotMatches(self, loader, timespread=14.0):
    """This function plots the matched events across all the loggers, so the
    quality of matching can be visually inspected. Supply a loader function
    which then loads up the correct audio, given a logger index and timestamp.
    Optional timespread is the time around the central event"""
    # sort out our plotting context (bounds)
    fmin = 0
    fmax = 3000
    fsize = 16  # label font size
    # first load up all the audio files and plot spectrograms
    pylab.figure(figsize=(16, 12))
    numplots = len(self.matchedEvents)
    axislist = []
    plotindex = 1
    # This is the start time of the event
    centre_time = self.referenceEvent.event_time
    for event in self.matchedEvents:
        stream = loader(event[0].logger, event[0].coarse_timestamp, centre_time, timespread)
        ax = pylab.subplot(numplots, 1, plotindex)
        axislist.append(ax)
        pylab.specgram(stream, Fs=44100, NFFT=4096, noverlap=3000)
        # relabel the x axis
        pylab.xticks(range(0, int(timespread), 2),
                     ["{:0.2f}".format(float(l) + centre_time - timespread / 2)
                      for l in range(0, int(timespread), 2)])
        pylab.ylim(fmin, fmax)
        pylab.grid(True)
        # Add annotation to each panel
        pylab.text((timespread) * 0.02, fmax * 0.8,
                   "Logger:" + str(event[0].logger.logger_id),
                   fontsize=fsize, style='normal')
        # add box to each panel
        delta_time = event[0].event_time - centre_time + timespread / 2
        if event[0].logger.logger_id == self.referenceEvent.logger.logger_id:
            rect_color = 'red'
        else:
            rect_color = 'black'
        ax.add_patch(
            patches.Rectangle(
                ((delta_time, (fmax - fmin) * 0.01)),
                self.referenceEvent.event_length,
                (fmax - fmin) * 0.98,
                edgecolor=rect_color,
                fill=False  # remove background
            ))
        str_time = "t={:.2f}s SS={:.2e}".format(event[0].event_time, event[0].SS)
        ax.text((timespread) * 0.11, fmax * 0.8, str_time, fontsize=fsize, style='normal')
        plotindex += 1
    if self.referenceEvent.classLabel is not None:
        pylab.suptitle(self.referenceEvent.classLabel)
    pylab.tight_layout()
print('=' * 80)
print("LinearSVC with L1-based feature selection")
results.append(benchmark(L1LinearSVC()))

# make some plots
indices = np.arange(len(results))

results = [[x[i] for x in results] for i in range(4)]

clf_names, score, training_time, test_time = results
training_time = np.array(training_time) / np.max(training_time)
test_time = np.array(test_time) / np.max(test_time)

pl.figure(figsize=(12, 8))
pl.title("Score")
pl.barh(indices, score, .2, label="score", color='r')
pl.barh(indices + .3, training_time, .2, label="training time", color='g')
pl.barh(indices + .6, test_time, .2, label="test time", color='b')
pl.yticks(())
pl.legend(loc='best')
pl.subplots_adjust(left=.25)
pl.subplots_adjust(top=.95)
pl.subplots_adjust(bottom=.05)

for i, c in zip(indices, clf_names):
    pl.text(-.3, i, c)

pl.show()
    # solve and return W, H, x, y, w, h
    sol = solvers.cpl(c, F, G, h)
    return sol['x'][0], sol['x'][1], sol['x'][2:7], sol['x'][7:12], \
        sol['x'][12:17], sol['x'][17:]

solvers.options['show_progress'] = False
pylab.figure(facecolor='w')

pylab.subplot(221)
Amin = matrix([100., 100., 100., 100., 100.])
W, H, x, y, w, h = floorplan(Amin)
for k in xrange(5):
    pylab.fill([x[k], x[k], x[k] + w[k], x[k] + w[k]],
               [y[k], y[k] + h[k], y[k] + h[k], y[k]],
               facecolor='#D0D0D0')
    pylab.text(x[k] + .5 * w[k], y[k] + .5 * h[k], "%d" % (k + 1))
pylab.axis([-1.0, 26, -1.0, 26])
pylab.xticks([])
pylab.yticks([])

pylab.subplot(222)
Amin = matrix([20., 50., 80., 150., 200.])
W, H, x, y, w, h = floorplan(Amin)
for k in xrange(5):
    pylab.fill([x[k], x[k], x[k] + w[k], x[k] + w[k]],
               [y[k], y[k] + h[k], y[k] + h[k], y[k]],
               facecolor='#D0D0D0')
    pylab.text(x[k] + .5 * w[k], y[k] + .5 * h[k], "%d" % (k + 1))
pylab.axis([-1.0, 26, -1.0, 26])
pylab.xticks([])
pylab.yticks([])
    # cbar_ax.set_ylabel('Synchronization to the 1st REST')
    # fig.savefig('sync'+desc, dpi=200)
    # plt.show()
    # plt.close()

fig = plt.figure(figsize=(8, 5))
ax = plt.pcolormesh(t / 60, f, np.median(specs, 0), cmap='RdBu_r', vmin=-80, vmax=80)
plt.xlabel('Time $t$ [s]')
plt.ylabel('Frequency $f$ [Hz]')
for j, (x, y) in enumerate(blocks_intervals):
    plt.text((x + y) / 2, 35, blocks_list[j], ha='center', va='bottom', weight='bold')
    plt.axvline(y, color='k', linestyle='--', alpha=0.6)
for (name, band) in bands:
    plt.text(1, np.mean(band), name, ha='center', va='center', weight='bold')
    plt.axhline(band[0], color='k', linestyle='--', alpha=0.6)
plt.title('Experimental' if not control else 'Control')
plt.ylim(0, 40)
plt.xlim(0, 13)
fig.subplots_adjust(right=0.8)
cbar_ax = fig.add_axes([0.85, 0.10, 0.05, 0.8])
cb = fig.colorbar(ax, cax=cbar_ax)
cbar_ax.set_ylabel('$S(t, f)$ [%]')
def _normalized_text(X, Y, rx, ry, msg, **kwargs):
    pylab.text(
        min(X) * (1.0 - rx) + max(X) * rx,
        min(Y) * (1.0 - ry) + max(Y) * ry,
        msg, **kwargs)
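# Usage sketch (not from the original source): _normalized_text interpolates
# between the data minimum and maximum, so (rx, ry) are fractional positions
# in data coordinates. Here the label lands near the upper-left of the data.
import pylab

X = range(100)
Y = [x ** 2 for x in X]
pylab.plot(X, Y)
_normalized_text(X, Y, 0.05, 0.9, "quadratic growth", fontsize=10)
pylab.show()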
def plot(self, filename=None):
    """ Plot vrep and derivative together with fit info.

    parameters:
    ===========
    filename:     graphics output file name
    """
    try:
        import pylab as pl
    except:
        raise AssertionError('pylab could not be imported')
    r = np.linspace(0, self.r_cut)
    v = [self(x, der=0) for x in r]
    vp = [self(x, der=1) for x in r]
    rmin = 0.95 * min([d[0] for d in self.deriv])
    rmax = 1.1 * self.r_cut

    fig = pl.figure()
    pl.subplots_adjust(wspace=0.25)

    # Vrep
    pl.subplot(1, 2, 1)
    pl.ylabel(r'$V_{rep}(r)$ (eV)')
    pl.xlabel(r'$r$ ($\AA$)')
    if self.r_dimer != None:
        pl.axvline(x=self.r_dimer, c='r', ls=':')
    pl.axvline(x=self.r_cut, c='r', ls=':')
    pl.plot(r, v)
    pl.ylim(ymin=0, ymax=self(rmin))
    pl.xlim(xmin=rmin, xmax=self.r_cut)

    # Vrep'
    pl.subplot(1, 2, 2)
    pl.ylabel(r'$dV_{rep}(r)/dr$ (eV/$\AA$)')
    pl.xlabel(r'$r$ ($\AA$)')
    pl.plot(r, vp, label=r'$dV_{rep}(r)/dr$')
    for s in self.deriv:
        pl.scatter([s[0]], [s[1]], s=100 * s[2], c=s[3], label=s[4])
    pl.axvline(x=self.r_cut, c='r', ls=':')
    if self.r_dimer != None:
        pl.axvline(x=self.r_dimer, c='r', ls=':')

    ymin = 0
    for point in self.deriv:
        if rmin <= point[0] <= rmax:
            ymin = min(ymin, point[1])
    ymax = np.abs(ymin) * 0.1
    pl.axhline(0, ls='--', c='k')
    if self.r_dimer != None:
        pl.text(self.r_dimer, ymax, r'$r_{dimer}$')
    pl.text(self.r_cut, ymax, r'$r_{cut}$')
    pl.xlim(xmin=rmin, xmax=rmax)
    pl.ylim(ymin=ymin, ymax=ymax)
    #pl.subtitle('Fitting for %s and %s' % (self.sym1, self.sym2))
    pl.rc('font', size=10)
    pl.rc('legend', fontsize=8)
    pl.legend(loc=4)
    file = '%s_%s_repulsion.pdf' % (self.sym1, self.sym2)
    if filename != None:
        file = filename
    pl.savefig(file)
    pl.clf()
def plot(wavekernel_out, wavekernel_out_path, to_show_msd, highlight_i,
         energy_min, energy_max, ymin, ymax, is_log, to_label, title, out_filename):
    condition = wavekernel_out['condition']
    eigenvalues = condition['eigenvalues']
    eigenstate_msd = condition['eigenstate_mean_z']
    eigenstate_msd = map(lambda x: x * kAngstromPerAu, eigenstate_msd)
    fst_filter = wavekernel_out['setting']['fst_filter']

    pylab.figure(figsize=(16, 12))
    if is_log:
        pylab.yscale('log')
    pylab.plot(eigenvalues, eigenstate_msd, 'o')
    if not highlight_i is None:
        j = highlight_i - fst_filter
        xs = [eigenvalues[j]]
        ys = [eigenstate_msd[j]]
        pylab.plot(xs, ys, 'o', color='red', label=str(highlight_i), markersize=10)
        pylab.legend(numpoints=1)
    if energy_min is None:
        energy_min = pylab.xlim()[0]
    if energy_max is None:
        energy_max = pylab.xlim()[1]
    if ymin is None:
        ymin = pylab.ylim()[0]
    if ymax is None:
        ymax = pylab.ylim()[1]
    pylab.xlim(energy_min, energy_max)
    pylab.ylim(ymin, ymax)

    xticks_new = list(pylab.xticks()[0])
    xticks_new.extend([energy_min, energy_max])
    pylab.xticks(xticks_new)
    pylab.xlim(energy_min, energy_max)  # limit setting again is needed.
    yticks_new = list(pylab.yticks()[0])
    yticks_new.extend([ymin, ymax])
    pylab.yticks(yticks_new)
    pylab.ylim(ymin, ymax)  # limit setting again is needed.
    pylab.rcParams.update({'font.size': 10})

    if to_label:
        j = fst_filter
        for x, y in zip(eigenvalues, eigenstate_msd):
            if energy_min <= x <= energy_max and ymin <= y <= ymax:
                pylab.text(x, y, str(j))
            j += 1
    # num_points = len(filter(
    #     lambda (x, y): energy_min <= x and x <= energy_max and ymin <= y and y <= ymax,
    #     zip(eigenvalues, eigenstate_msd)))
    #
    if title is None:
        title = wavekernel_out_path
    pylab.xlabel('Energy [a.u.]')
    pylab.ylabel('Mean x [$\AA$]')
    pylab.title(title)

    if out_filename is None:
        out_filename = re.sub("\.[^.]+$", "", wavekernel_out_path) + "_eigenvalue_vs_mean.png"
    pylab.savefig(out_filename)
def segment_and_group_sources(image, T, name=None, ps=None, plots=False): ''' *image*: binary image that defines "blobs" *T*: source table; only ".ibx" and ".iby" elements are used (x,y integer pix pos). Note: ".blob" field is added. *name*: for debugging only Returns: (blobs, blobsrcs, blobslices) *blobs*: image, values -1 = no blob, integer blob indices *blobsrcs*: list of np arrays of integers, elements in T within each blob *blobslices*: list of slice objects for blob bounding-boxes. ''' from scipy.ndimage.morphology import binary_fill_holes from scipy.ndimage.measurements import label, find_objects image = binary_fill_holes(image) blobs, nblobs = label(image) #print('Detected blobs:', nblobs) H, W = image.shape del image blobslices = find_objects(blobs) clipx = np.clip(T.ibx, 0, W - 1) clipy = np.clip(T.iby, 0, H - 1) T.blob = blobs[clipy, clipx] if plots: import pylab as plt from astrometry.util.plotutils import dimshow plt.clf() dimshow(blobs > 0, vmin=0, vmax=1) ax = plt.axis() for i, bs in enumerate(blobslices): sy, sx = bs by0, by1 = sy.start, sy.stop bx0, bx1 = sx.start, sx.stop plt.plot([bx0, bx0, bx1, bx1, bx0], [by0, by1, by1, by0, by0], 'r-') plt.text((bx0 + bx1) / 2., by0, '%i' % (i + 1), ha='center', va='bottom', color='r') plt.plot(T.ibx, T.iby, 'rx') for i, t in enumerate(T): plt.text(t.ibx, t.iby, 'src %i' % i, color='red', ha='left', va='center') plt.axis(ax) plt.title('Blobs') ps.savefig() # Find sets of sources within blobs blobsrcs = [] keepslices = [] blobmap = {} for blob in range(1, nblobs + 1): Isrcs, = np.nonzero(T.blob == blob) if len(Isrcs) == 0: blobmap[blob] = -1 continue blobmap[blob] = len(blobsrcs) blobsrcs.append(Isrcs) bslc = blobslices[blob - 1] keepslices.append(bslc) blobslices = keepslices # Find sources that do not belong to a blob and add them as # singleton "blobs"; otherwise they don't get optimized. # for sources outside the image bounds, what should we do? inblobs = np.zeros(len(T), bool) for Isrcs in blobsrcs: inblobs[Isrcs] = True noblobs = np.flatnonzero(np.logical_not(inblobs)) del inblobs #print(len(noblobs), 'sources are not in blobs') # Remap the "blobs" image so that empty regions are = -1 and the blob values # correspond to their indices in the "blobsrcs" list. if len(blobmap): maxblob = max(blobmap.keys()) else: maxblob = 0 maxblob = max(maxblob, blobs.max()) bm = np.zeros(maxblob + 1, int) for k, v in blobmap.items(): bm[k] = v bm[0] = -1 # Remap blob numbers blobs = bm[blobs] if plots: from astrometry.util.plotutils import dimshow plt.clf() dimshow(blobs > -1, vmin=0, vmax=1) ax = plt.axis() for i, bs in enumerate(blobslices): sy, sx = bs by0, by1 = sy.start, sy.stop bx0, bx1 = sx.start, sx.stop plt.plot([bx0, bx0, bx1, bx1, bx0], [by0, by1, by1, by0, by0], 'r-') plt.text((bx0 + bx1) / 2., by0, '%i' % (i + 1), ha='center', va='bottom', color='r') plt.plot(T.ibx, T.iby, 'rx') for i, t in enumerate(T): plt.text(t.ibx, t.iby, 'src %i' % i, color='red', ha='left', va='center') plt.axis(ax) plt.title('Blobs') ps.savefig() for j, Isrcs in enumerate(blobsrcs): for i in Isrcs: if (blobs[clipy[i], clipx[i]] != j): print( '---------------------------!!!-------------------------') print('Blob', j, 'sources', Isrcs) print('Source', i, 'coords x,y', T.ibx[i], T.iby[i]) print('Expected blob value', j, 'but got', blobs[clipy[i], clipx[i]]) T.blob = blobs[clipy, clipx] assert (len(blobsrcs) == len(blobslices)) return blobs, blobsrcs, blobslices
def Plot_PS_W(xps1, xps1_w, xps2, xps2_w, freqs, signal_amp, sampling_rate,
              fft_size, noise_type, window_type, m1, m2):
    xps1_dBm = xps1[0]
    xps1_w_dBm = xps1_w[0]
    xps2_dBm = xps2[0]
    xps2_w_dBm = xps2_w[0]
    xps1_watt = xps1[1]
    xps1_w_watt = xps1_w[1]
    xps2_watt = xps2[1]
    xps2_w_watt = xps2_w[1]

    a1 = max(xps1_dBm)
    a1_w = max(xps1_w_dBm)
    b1 = numpy.argmax(xps1_dBm)
    b1_w = numpy.argmax(xps1_w_dBm)
    a2 = max(xps2_dBm)
    a2_w = max(xps2_w_dBm)
    b2 = numpy.argmax(xps2_dBm)
    b2_w = numpy.argmax(xps2_w_dBm)

    signal_power = signal_amp * signal_amp / 2
    accuracy1 = abs(xps1_watt[b1] - signal_power) / signal_power
    accuracy1_w = abs(xps1_w_watt[b1_w] - signal_power) / signal_power
    accuracy2 = abs(xps2_watt[b2] - signal_power) / signal_power
    accuracy2_w = abs(xps2_w_watt[b2_w] - signal_power) / signal_power
    print(a2, b2)

    pl.figure(figsize=(8, 6))
    pl.subplot(211)
    pl.step(freqs, xps1_dBm, color='green', label=u"Without noise, without window")
    pl.step(freqs, xps1_w_dBm, label=u'Without noise, with %s window' % (window_type))
    pl.ylabel(u'Power(dBm)')
    pl.annotate('%s Hz, %s dBm' % (b1 * sampling_rate / fft_size, a1),
                xy=(freqs[b1], xps1_dBm[b1]),
                xytext=(freqs[b1] + 5, xps1_dBm[b1] - 30))
    props = dict(boxstyle='round', facecolor='none', alpha=0.5)
    pl.text(1500, -120, 'accuracy:%s\naccuracy_w:%s' % (accuracy1, accuracy1_w),
            size=10, bbox=props)
    pl.legend(fontsize=8)
    pl.title(u"Power Spectrum(fs=%sHz, n=%s)" % (sampling_rate, fft_size))

    pl.subplot(212)
    pl.step(freqs, xps2_dBm, color='green', label=u"With %s, without window" % (noise_type))
    pl.step(freqs, xps2_w_dBm, label=u'With %s, with %s window' % (noise_type, window_type))
    pl.xlabel(u'Frequency(Hz)')
    pl.ylabel(u'Power(dBm)')
    pl.annotate('%s Hz, %s dBm' % (b2 * sampling_rate / fft_size, a2),
                xy=(freqs[b2], xps2_dBm[b2]),
                xytext=(freqs[b2] + 5, xps2_dBm[b2] - 10))
    props = dict(boxstyle='round', facecolor='none', alpha=0.5)
    pl.text(1500, -100,
            'mean/lower-boundary:%s\nstd/higher-boundary:%s\naccuracy:%s\naccuracy_w:%s'
            % (m1, m2, accuracy2, accuracy2_w),
            size=10, bbox=props)
    pl.legend(fontsize=8)
    pl.savefig('power_spectrum_wind.pdf')
    else:
        algorithm.fit(X)
    t1 = time.time()
    if hasattr(algorithm, 'labels_'):
        y_pred = algorithm.labels_.astype(np.int)
    else:
        y_pred = algorithm.predict(X)

    # plot
    pl.subplot(4, 6, plot_num)
    if i_dataset == 0:
        pl.title(str(algorithm).split('(')[0], size=18)
    pl.scatter(X[:, 0], X[:, 1], color=colors[y_pred].tolist(), s=10)

    if hasattr(algorithm, 'cluster_centers_'):
        centers = algorithm.cluster_centers_
        center_colors = colors[:len(centers)]
        pl.scatter(centers[:, 0], centers[:, 1], s=100, c=center_colors)
    pl.xlim(-2, 2)
    pl.ylim(-2, 2)
    pl.xticks(())
    pl.yticks(())
    pl.text(.99, .01, ('%.2fs' % (t1 - t0)).lstrip('0'),
            transform=pl.gca().transAxes, size=15,
            horizontalalignment='right')
    plot_num += 1

pl.show()
def plot_monte_histograms(data, graph_name='gene_histogram.png', bins=20,\ normal_fit=True, normed=True, colors=None, linecolors=None, \ alpha=0.75, prob_axes=True, series_names=None, show_legend=False,\ y_label=None, x_label=None, **kwargs): """Outputs a histogram with multiple series (must provide a list of series). Differs from regular histogram in that p-value works w/exactly two datasets, where the first dataset is the reference set. Calculates the mean of the reference set, and compares this to the second set (which is assumed to contain the means of many runs producing data comparable to the data in the reference set). takes: data: list of arrays of values to plot (needs to be list of arrays so you can pass in arrays with different numbers of elements) graph_name: filename to write graph to bins: number of bins to use normal_fit: whether to show the normal curve best fitting the data normed: whether to normalize the histogram (e.g. so bars sum to 1) colors: list of colors to use for bars linecolors: list of colors to use for fit lines **kwargs are passed on to init_graph_display. """ rc('patch', linewidth=.2) rc('font', size='x-small') rc('axes', linewidth=.2) rc('axes', labelsize=7) rc('xtick', labelsize=7) rc('ytick', labelsize=7) if y_label is None: if normed: y_label = 'Frequency' else: y_label = 'Count' num_series = len(data) if colors is None: if num_series == 1: colors = ['white'] else: colors = standard_series_colors if linecolors is None: if num_series == 1: linecolors = ['red'] else: linecolors = standard_series_colors init_graph_display(prob_axes=prob_axes, y_label=y_label, **kwargs) all_patches = [] for i, d in enumerate(data): fc = colors[i % len(colors)] lc = linecolors[i % len(linecolors)] counts, x_bins, patches = hist(d, bins=bins, normed=normed, \ alpha=alpha, facecolor=fc) all_patches.append(patches[0]) if normal_fit and len(d) > 1: mu = mean(d) sigma = std(d) minv = min(d) maxv = max(d) bin_width = x_bins[-1] - x_bins[-2] #set range for normpdf normpdf_bins = arange(minv, maxv, 0.01 * (maxv - minv)) y = normpdf(normpdf_bins, mu, sigma) orig_area = sum(counts) * bin_width y = y * orig_area #normpdf area is 1 by default plot(normpdf_bins, y, linestyle='--', color=lc, linewidth=1) font = {'color': lc, 'fontsize': 11} text(mu, 0.0, "*", font, verticalalignment='center', horizontalalignment='center') xlabel(x_label) if show_legend and series_names: fp = FontProperties() fp.set_size('x-small') legend(all_patches, series_names, prop=fp) #output figure if graph name set -- otherwise, leave for further changes if graph_name is not None: savefig(graph_name)
fig = pl.figure(figsize=(8, 5))
pl.rc('font', size=18)
pl.rc('mathtext', default='regular')
pl.xlim(-2000, 2000)
pl.ylim(-0.4, 1.2)

# pl.plot(CO54_vel, CO54_spectra_smooth_W/max(CO54_spectra_smooth_W), 'r-', label='CO(5-4)')
# pl.plot(NII_vel, NII_spectra_smooth_W/max(NII_spectra_smooth_W), 'g-', label='[NII]')
# pl.plot(CII_vel, CII_spectra_smooth_W/max(CII_spectra_smooth_W), 'b-', label='[CII]')
pl.plot(CO54_vel, CO54_spectra_smooth_W / max(CO54_spectra_smooth_W), c='r', ls='steps', label='CO(5-4)')
pl.plot(NII_vel, NII_spectra_smooth_W / max(NII_spectra_smooth_W), c='g', ls='steps', label='[NII]')
pl.plot(CII_vel, CII_spectra_smooth_W / max(CII_spectra_smooth_W), c='b', ls='steps', label='[CII]')
pl.text(-1750, 1, 'SPT0348-W', size=18)

pl.ylabel('Fraction of peak')
pl.xlabel(r'Radio velocity (km s$^{-1}$)')
pl.legend(fontsize=14, loc=0, ncol=1, frameon=True, numpoints=1, scatterpoints=1)
pl.savefig('spectra_smooth_W.pdf', bbox_inches='tight')
pl.close()

################
# plot spectra E
################
eeg_data = csp_multi_class.apply(eeg_data)[1000:]
labels = labels[1000:]
n_samples = len(eeg_data)

eeg_data = InstantaneousVarianceFilter(eeg_data.shape[1], n_taps=fs // 2).apply(eeg_data)
#eeg_data = eeg_data/eeg_data.std(0)

plt.plot(eeg_data / eeg_data.std(0) / 5 + np.arange(len(eeg_data[0])))
j = 0
for label in [1, 2, 6]:
    for ch in bands:
        j += 2
        plt.text(0, j, str(label) + str(ch))

from sklearn import datasets
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import LinearSVC
from sklearn.linear_model import LogisticRegression

#clf = OneVsRestClassifier(LogisticRegression(random_state=0, penalty='l1'))
clf = MLPClassifier(hidden_layer_sizes=(), early_stopping=True, verbose=True)

k = 2 * n_samples // 3
train_slice = slice(None, k)
test_slice = slice(k, None)
acc_train = sum(
    clf.fit(eeg_data[train_slice], labels[train_slice]).predict(
        eeg_data[train_slice]) == labels[train_slice]) / len(labels[train_slice])
'''
Plot a sine function.
-----------------------------------------------------------
(c) 2013 Allegra Via and Kristian Rother
    Licensed under the conditions of the Python License

    This code appears in section 16.4.1 of the book
    "Managing Biological Data with Python".
-----------------------------------------------------------
'''
from pylab import figure, plot, text, axis, savefig
import math

figure()
xdata = [0.1 * i for i in range(100)]
ydata = [math.sin(j) for j in xdata]

plot(xdata, ydata, 'kd', linewidth=1)
text(4.8, 0, "$y = sin(x)$", horizontalalignment='center', fontsize=20)
axis([0, 3 * math.pi, -1.2, 1.2])
savefig('sinfunc.png')
for j in range(16):
    new_pos = seq.predict(track[np.newaxis, ::, ::, ::, ::])
    new = new_pos[::, -1, ::, ::, ::]
    track = np.concatenate((track, new), axis=0)

# And then compare the predictions
# to the ground truth
track2 = noisy_movies[which][::, ::, ::, ::]
for i in range(15):
    fig = plt.figure(figsize=(10, 5))

    ax = fig.add_subplot(121)
    if i >= 7:
        ax.text(1, 3, 'Predictions !', fontsize=20, color='w')
    else:
        ax.text(1, 3, 'Initial trajectory', fontsize=20)

    toplot = track[i, ::, ::, 0]
    plt.imshow(toplot)

    ax = fig.add_subplot(122)
    plt.text(1, 3, 'Ground truth', fontsize=20)

    toplot = track2[i, ::, ::, 0]
    if i >= 2:
        toplot = shifted_movies[which][i - 1, ::, ::, 0]

    plt.imshow(toplot)
    plt.savefig('%i_animate.png' % (i + 1))
def graphPer(StatTable, Fields=[ '_Efficiency', '_NavConf', '_DirRtng', '_TgtFound', ], TestName='', StatType='A_', StdType=None, Range=None): if not graph: return Fig = pylab.figure() Names = [] maxStat = 0 if Range: l = len(Range) + 1 FollowerNames = [StatTable['/FollowerID'][i] for i in Range] else: l = len(StatTable['/FollowerID']) + 1 FollowerNames = StatTable['/FollowerID'] for field, Line in zip(Fields, [ '-.', ':', '-', '--', ]): if len(FollowerNames) > 6: offset = .5 else: offset = 1 for Giver, Fmt in zip(LogAnalyzer.Directors, Formats): Name = StatType + Giver + field Names.append(Name) if Range: StatX = [StatTable[Name][i] for i in Range] if StdType: StatY = [ StatTable[StdType + Giver + field][i] for i in Range ] else: StatX = StatTable[Name] if StdType: StatY = StatTable[StdType + Giver + field] Length = len(StatX) if StdType: maxStat = max(maxStat, max(StatX + StatY)) else: maxStat = max(maxStat, max(StatX)) Xvals = [x + offset for x in range(Length)] if StdType: pylab.errorbar(Xvals, StatX, yerr=StatY, fmt=Fmt) #+Line) else: pylab.plot(Xvals, StatX, Fmt) #+Line) #pylab.semilogy(Xvals,StatX,Fmt+Line) if len(FollowerNames) < 6: offset += 0.1 if Range: l = len(Range) + 1 FollowerNames = [StatTable['/FollowerID'][i] for i in Range] else: l = len(StatTable['/FollowerID']) + 1 FollowerNames = StatTable['/FollowerID'] if len(FollowerNames) > 6: for num, Follower in zip(range(1, l), FollowerNames): pylab.text(num, -0.5, Follower[0:7], rotation='vertical') pylab.text(num, 6.5, Follower[0:7], rotation='vertical') else: for num, Follower in zip(range(1, l), FollowerNames): pylab.text(num, -0.5, Follower, rotation='horizontal') pylab.text(num, 6.5, Follower, rotation='horizontal') pylab.axis([0, Length * 2, 0, max(int(math.ceil(maxStat)), 6)]) pylab.grid(1) Axis = pylab.gca() Axis.set_yticks(range(int(math.ceil(maxStat + 2)))) #Axis.set_xticks([]) adornGraph( Title='Results per direction giver: ' + StatType + TestName, Filename='Results_per_' + StatType + TestName, Legend=Names, )
def ResVectorPlot(root='./', align='align/align_t', poly='polyfit_d/fit', points='points_d/', useAccFits=False, numEpochs=7, TargetName='OB120169_R', radCut_pix=10000, magCut=22): print 'Creating quiver plot of residuals...' s = starset.StarSet(root + align) s.loadPolyfit(root + poly, accel=0, arcsec=0) try: pointsFile = root + points + TargetName + '.points' if os.path.exists(pointsFile + '.orig'): pointsTab = Table.read(pointsFile + '.orig', format='ascii') else: pointsTab = Table.read(pointsFile, format='ascii') times = pointsTab[pointsTab.colnames[0]] except: print 'Star ' + TargetName + ' not in list' py.clf() # for ee in range(1): py.close(1) py.figure(1, figsize=(10, 10)) for ee in range(numEpochs): # Observed data x = s.getArrayFromEpoch(ee, 'xpix') y = s.getArrayFromEpoch(ee, 'ypix') m = s.getArrayFromEpoch(ee, 'mag') rad = np.hypot(x - 512, y - 512) good = (rad < radCut_pix) & (m < magCut) idx = np.where(good)[0] stars = [s.stars[i] for i in idx] x = x[idx] y = y[idx] rad = rad[idx] Nstars = len(x) x_fit = np.zeros(Nstars, dtype=float) y_fit = np.zeros(Nstars, dtype=float) residsX = np.zeros(Nstars, dtype=float) residsY = np.zeros(Nstars, dtype=float) idx2 = [] for i in range(Nstars): fitx = stars[i].fitXv fity = stars[i].fitYv StarName = stars[i].name dt = times[ee] - fitx.t0 fitLineX = fitx.p + (fitx.v * dt) fitSigX = np.sqrt(fitx.perr**2 + (dt * fitx.verr)**2) fitLineY = fity.p + (fity.v * dt) fitSigY = np.sqrt(fity.perr**2 + (dt * fity.verr)**2) x_fit[i] = fitLineX y_fit[i] = fitLineY residsX[i] = x[i] - fitLineX residsY[i] = y[i] - fitLineY idx = np.where((np.abs(residsX) < 10.0) & (np.abs(residsY) < 10.0))[0] py.subplot(3, 3, ee + 1) py.ylim(0, 1100) py.xlim(0, 1100) py.yticks(fontsize=10) py.xticks([200, 400, 600, 800, 1000], fontsize=10) q = py.quiver(x_fit[idx], y_fit[idx], residsX[idx], residsY[idx], scale_units='width', scale=0.5) py.quiver([850, 0], [100, 0], [0.05, 0.05], [0, 0], color='red', scale=0.5, scale_units='width') py.text(600, 100, '0.5 mas', color='red', fontsize=8) # py.quiverkey(q, 0.85, 0.1, 0.02, '0.2 mas', color='red', fontsize=6) fname = 'quiverplot_all.png' py.subplots_adjust(bottom=0.1, right=0.97, top=0.97, left=0.05) if os.path.exists(root + 'plots/' + fname): os.remove(root + 'plots/' + fname) py.savefig(root + 'plots/' + fname) return
pl.figure(2)
pl.subplot("111", aspect=1.0)
pl.errorbar([np.mean(cdtwResult)], [np.mean(gemResult)],
            [np.std(cdtwResult)], [np.std(gemResult)], c="grey")
pl.plot([np.mean(cdtwResult)], [np.mean(gemResult)], "o", c="blue")

pl.figure(1)
pl.subplot("111", aspect=1.0)
#pl.title("Texas Sharpshooter Plot")
pl.plot([-0.25, 0.25], [0, 0], c="black")
pl.plot([0, 0], [-0.25, 0.25], c="black")
pl.xlabel('relative error difference on training set')
pl.ylabel('relative error difference on test set')
pl.axis((-0.25, 0.25, -0.25, 0.25))
pl.text(+0.2, +0.2, r'TP', fontsize=20)
pl.text(-0.23, -0.23, r'TN', fontsize=20)
pl.text(+0.2, -0.23, r'FP', fontsize=20)
pl.text(-0.23, +0.2, r'FN', fontsize=20)
pl.annotate('Lighting7', xy=(-0.12, -0.1), xytext=(-0.244, -0.04),
            arrowprops=dict(facecolor='lightgrey', shrink=0.05), color="grey")
pl.annotate('Fish', xy=(0.095, 0.115), xytext=(0.01, 0.16),
            arrowprops=dict(facecolor='lightgrey', shrink=0.05), color="grey")

pl.figure(2)
    cluster_center = k_means_cluster_centers[k]
    ax.plot(X[my_members, 0], X[my_members, 1], 'w',
            markerfacecolor=col, marker='.')
    ax.plot(cluster_center[0], cluster_center[1], 'o', markerfacecolor=col,
            markeredgecolor='k', markersize=6)
ax.set_title('KMeans')
ax.set_xticks(())
ax.set_yticks(())
pl.text(-3.5, 1.8, 'train time: %.2fs\ninertia: %f' % (t_batch, k_means.inertia_))

# MiniBatchKMeans
ax = fig.add_subplot(1, 3, 2)
for k, col in zip(range(n_clusters), colors):
    my_members = mbk_means_labels == order[k]
    cluster_center = mbk_means_cluster_centers[order[k]]
    ax.plot(X[my_members, 0], X[my_members, 1], 'w',
            markerfacecolor=col, marker='.')
    ax.plot(cluster_center[0], cluster_center[1], 'o', markerfacecolor=col,
minX, minY = 300, 800
sizeX, sizeY = 215, 200
maxX, maxY = minX + sizeX, minY + sizeY
data_raw = data[minX:maxX, minY:maxY]
#sizeX, sizeY = data_raw.shape

# Create x and y indices
x = np.arange(0, sizeX)
y = np.arange(0, sizeY)
x, y = np.meshgrid(x, y)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(13, 8))
plt.gcf().canvas.set_window_title('2D Gaussian Fitting')
ax1.set_title("raw 2D Gaussian")
ax2.set_title("fitted 2D Gaussian")

popt, pcov = opt.curve_fit(twoD_Gaussian, (x, y), np.ravel(data_raw), p0=initial_guess())
data_fitted = twoD_Gaussian((x, y), *popt)
print(getResult(popt))

ax2.imshow(data_raw.reshape(sizeX, sizeY), cmap=plt.cm.jet)
ax2.contour(y.transpose(), x.transpose(), data_fitted.reshape(sizeX, sizeY), 4, colors='w')
ax1.imshow(data_raw)
plt.text(25, 170, getResult(popt)[0], bbox=dict(facecolor='yellow', alpha=0.5), fontsize=12)
plt.text(25, 185, getResult(popt)[1], bbox=dict(facecolor='yellow', alpha=0.5), fontsize=12)
plt.show()
def plotcc(current_data):
    from pylab import plot, text
    plot([235.8162], [41.745616], 'wo')
    text(235.8, 41.9, 'Cr.City', color='w', fontsize=10)
def plot(self, fig_number=322): """plot the stored data in figure fig_number. Dependencies: matlabplotlib/pylab. Example ======= :: >> import barecmaes2 as cma >> es = cma.CMAES(3 * [0.1], 1) >> logger = cma.CMAESDataLogger().register(es) >> while not es.stop(): >> X = es.ask() >> es.tell(X, [bc.Fcts.elli(x) for x in X]) >> logger.add() >> logger.plot() """ if pylab is None: return None pylab.figure(fig_number) from pylab import text, hold, plot, ylabel, grid, semilogy, \ xlabel, draw, show, subplot dat = self.dat # dictionary with entries as given in __init__ if not dat: return try: # a hack to get the presumable population size lambda strpopsize = ' (popsize~' + str(dat['eval'][-2] - dat['eval'][-3]) + ')' except IndexError: strpopsize = '' # plot fit, Delta fit, sigma subplot(221) hold(False) if dat['fit'][0] is None: dat['fit'][0] = dat['fit'][1] # should be reverted later, but let's be lazy assert dat['fit'].count(None) == 0 dmin = min(dat['fit']) i = dat['fit'].index(dmin) dat['fit'][i] = max(dat['fit']) dmin2 = min(dat['fit']) dat['fit'][i] = dmin semilogy(dat['iter'], [d - dmin + 1e-19 if d >= dmin2 else dmin2 - dmin for d in dat['fit']], 'c', linewidth=1) hold(True) semilogy(dat['iter'], [abs(d) for d in dat['fit']], 'b') semilogy(dat['iter'][i], abs(dmin), 'r*') semilogy(dat['iter'], dat['sig'], 'g') ylabel('f-value, Delta-f-value, sigma') grid(True) # plot xmean subplot(222) hold(False) plot(dat['iter'], dat['xm']) hold(True) for i in range(len(dat['xm'][-1])): text(dat['iter'][0], dat['xm'][0][i], str(i)) text(dat['iter'][-1], dat['xm'][-1][i], str(i)) ylabel('mean solution', ha='center') grid(True) # plot D subplot(223) hold(False) semilogy(dat['iter'], dat['D'], 'm') xlabel('iterations' + strpopsize) ylabel('axes lengths') grid(True) # plot stds subplot(224) # if len(gcf().axes) > 1: # sca(pylab.gcf().axes[1]) # else: # twinx() hold(False) semilogy(dat['iter'], dat['stds']) for i in range(len(dat['stds'][-1])): text(dat['iter'][-1], dat['stds'][-1][i], str(i)) ylabel('coordinate stds disregarding sigma', ha='center') grid(True) xlabel('iterations' + strpopsize) sys.stdout.flush() draw() show() CMAESDataLogger.plotted += 1
])
error_ve = np.array([
    6.84474776e-07, 4.39461983e-07, 3.23219057e-07, 2.25775400e-07,
    1.64093065e-07, 1.26171326e-07, 9.91391890e-08, 7.91353527e-08,
    6.54439517e-08
])
error_vi = np.array([
    4.34199278e-06, 3.12494787e-06, 2.28197680e-06, 1.65532127e-06,
    1.22502294e-06, 9.39663685e-07, 7.40306682e-07, 5.96650924e-07,
    4.90344023e-07
])

pl.loglog(N, error_ve, '-o', label=r'$v_e$')
pl.loglog(N, error_vi, '-o', label=r'$v_i$')
pl.loglog(N, error_E, '-o', label=r'$E$')
pl.loglog(N, error_B, '-o', label=r'$B$')
pl.loglog(N[-4:], 2e-2 / N[-4:]**2, '--', color='black',
          label=r'$\mathcal{O}(N^{-2})$')
pl.text(2**7, 1.3e-6, r'$\mathcal{O}(N^{-2})$', fontsize=20)
pl.text(2**6 - 8, 2.5e-7, r'$e^-$', fontsize=20)
pl.text(2**6 - 8, 3.2e-6, r'$p^+$', fontsize=20)
pl.text(2**6 - 8, 4.5e-7, r'$E$', fontsize=20)
pl.text(2**6 - 8, 1.3e-6, r'$B$', fontsize=20)
pl.xlabel(r'$N$')
pl.ylabel('Error')
pl.xscale('log', basex=2)
pl.savefig('plot.png', bbox_inches='tight')
# first, create a figure.
fig = figure()
# background color will be used for 'wet' areas.
fig.add_axes([0.1, 0.1, 0.8, 0.8], axisbg='aqua')
# draw colored markers.
# use zorder=10 to make sure markers are drawn last.
# (otherwise they are covered up when continents are filled)
#m.scatter(x,y,25,z,cmap=cm.jet,marker='o',faceted=False,zorder=10)
# create a list of strings containing z values
# or, plot actual numbers as color-coded text strings.
zn = ['%2i' % zz for zz in z]
# plot numbers on map, colored by value.
for numstr, zval, xpt, ypt in zip(zn, z, x, y):
    # only plot values inside map region.
    if xpt > m.xmin and xpt < m.xmax and ypt > m.ymin and ypt < m.ymax:
        hexcolor = rgb2hex(cm.jet(zval / 100.)[:3])
        text(xpt, ypt, numstr, fontsize=9, weight='bold', color=hexcolor)
# draw coasts and fill continents.
m.drawcoastlines(linewidth=0.5)
m.fillcontinents(color='coral')
# draw parallels and meridians.
delat = 20.
circles = arange(0., 90., delat).tolist() + \
          arange(-delat, -90, -delat).tolist()
m.drawparallels(circles)
delon = 45.
meridians = arange(0, 360, delon)
m.drawmeridians(meridians, labels=[1, 1, 1, 1])
title('Random Points', y=1.075)
show()
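# --- hedged sketch (assumption): the snippet above expects a Basemap instance m
# --- plus random lon/lat points already converted to map coordinates. Something
# --- like the following setup would work; projection and ranges are illustrative.
from pylab import figure, text, title, show, arange
from numpy.random import uniform
from matplotlib.colors import rgb2hex
from matplotlib import cm
from mpl_toolkits.basemap import Basemap

m = Basemap(projection='cyl')           # global cylindrical equidistant map
npts = 50
lons = uniform(-180., 180., npts)       # random longitudes
lats = uniform(-75., 75., npts)         # random latitudes
z = uniform(0., 100., npts)             # values to display as colored text
x, y = m(lons, lats)                    # convert to map projection coordinates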
print('no solution found')
sys.exit()

pyschedule.plotters.matplotlib.plot(S, resource_height=1.0)

# plot tours
import pylab
sol = S.solution()
blue_tour = [coords[str(city)] for (city, resource, start, end) in sol
             if str(city) in coords and str(resource) == 'blue']
pylab.plot([x for x, y in blue_tour], [y for x, y in blue_tour],
           linewidth=2.0, color='blue')
red_tour = [coords[str(city)] for (city, resource, start, end) in sol
            if str(city) in coords if str(resource) == 'red']
pylab.plot([x for x, y in red_tour], [y for x, y in red_tour],
           linewidth=2.0, color='red')

# plot city names
for city in cities:
    pylab.text(coords[str(city)][0], coords[str(city)][1], city,
               color='black', fontsize=10)
pylab.title('VRP Germany')
pylab.show()
pl.figure(6)
pl.subplot(7, 7, cntfit)
pl.plot(xs, fits[:, 4], '*-')
pl.plot(xs[ok], fits[ok, 4], 'or-')
pl.ylim([2.9, 4])
cntfit += 1

delta = (extent[0] + ff[0]) - pos[0]
poss.append(extent[0] + ff[0])
deltas.append(delta)
q = np.degrees(ff[1]) - np.degrees(CSU.rotation)
qs.append(q)

pl.figure(1)
pl.text(pos[0], pos[1],
        'b%2.0i: w=%3.2f p=%5.2f q=%3.2f d=%1.3f' %
        (bar, np.mean(fits[:, 4]), extent[0] + ff[0], q, delta),
        fontsize=11, family='monospace', horizontalalignment='center')

pl.xlim([0, 2048])
pl.ylim([0, 2048])
means = np.array(means)
sds = np.array(sds)
pl.draw()
def plot_topo_file(topoplotdata):
    """
    Read in a topo or bathy file and produce a pcolor map.
    """
    import os
    import pylab
    from clawpack.clawutil.data import ClawData

    fname = topoplotdata.fname
    topotype = topoplotdata.topotype
    if topoplotdata.climits:
        # deprecated option
        cmin = topoplotdata.climits[0]
        cmax = topoplotdata.climits[1]
    else:
        cmin = topoplotdata.cmin
        cmax = topoplotdata.cmax
    figno = topoplotdata.figno
    addcolorbar = topoplotdata.addcolorbar
    addcontour = topoplotdata.addcontour
    contour_levels = topoplotdata.contour_levels
    xlimits = topoplotdata.xlimits
    ylimits = topoplotdata.ylimits
    coarsen = topoplotdata.coarsen
    imshow = topoplotdata.imshow
    gridedges_show = topoplotdata.gridedges_show
    neg_cmap = topoplotdata.neg_cmap
    pos_cmap = topoplotdata.pos_cmap
    cmap = topoplotdata.cmap
    print_fname = topoplotdata.print_fname

    if neg_cmap is None:
        neg_cmap = colormaps.make_colormap({cmin: [0.3, 0.2, 0.1],
                                            0: [0.95, 0.9, 0.7]})
    if pos_cmap is None:
        pos_cmap = colormaps.make_colormap({0: [.5, .7, 0],
                                            cmax: [.2, .5, .2]})
    if cmap is None:
        cmap = colormaps.make_colormap({-1: [0.3, 0.2, 0.1],
                                        -0.00001: [0.95, 0.9, 0.7],
                                        0.00001: [.5, .7, 0],
                                        1: [.2, .5, .2]})
        #cmap = colormaps.make_colormap({-1:[0,0,1],0:[1,1,1],1:[1,0,0]})

    if abs(topotype) == 1:
        X, Y, topo = topotools.topofile2griddata(fname, topotype)
        topo = pylab.flipud(topo)
        Y = pylab.flipud(Y)
        x = X[0, :]
        y = Y[:, 0]
        xllcorner = x[0]
        yllcorner = y[0]
        cellsize = x[1] - x[0]

    elif abs(topotype) == 3:
        file = open(fname, 'r')
        lines = file.readlines()
        ncols = int(lines[0].split()[0])
        nrows = int(lines[1].split()[0])
        xllcorner = float(lines[2].split()[0])
        yllcorner = float(lines[3].split()[0])
        cellsize = float(lines[4].split()[0])
        NODATA_value = int(lines[5].split()[0])

        print "Loading file ", fname
        print "   nrows = %i, ncols = %i" % (nrows, ncols)
        topo = pylab.loadtxt(fname, skiprows=6, dtype=float)
        print "   Done loading"

        if 0:
            topo = []
            for i in range(nrows):
                topo.append(pylab.array(lines[6 + i], ))
            print '+++ topo = ', topo
            topo = pylab.array(topo)

        topo = pylab.flipud(topo)

        x = pylab.linspace(xllcorner, xllcorner + ncols * cellsize, ncols)
        y = pylab.linspace(yllcorner, yllcorner + nrows * cellsize, nrows)
        print "Shape of x, y, topo: ", x.shape, y.shape, topo.shape

    else:
        raise Exception("*** Only topotypes 1 and 3 supported so far")

    if coarsen > 1:
        topo = topo[slice(0, nrows, coarsen), slice(0, ncols, coarsen)]
        x = x[slice(0, ncols, coarsen)]
        y = y[slice(0, nrows, coarsen)]
        print "Shapes after coarsening: ", x.shape, y.shape, topo.shape

    if topotype < 0:
        topo = -topo

    if figno:
        pylab.figure(figno)

    if topoplotdata.imshow:
        color_norm = Normalize(cmin, cmax, clip=True)
        xylimits = (x[0], x[-1], y[0], y[-1])
        #pylab.imshow(pylab.flipud(topo.T), extent=xylimits, \
        pylab.imshow(pylab.flipud(topo), extent=xylimits,
                     cmap=cmap, interpolation='nearest',
                     norm=color_norm)
        #pylab.clim([cmin,cmax])
        if addcolorbar:
            pylab.colorbar()
    else:
        neg_topo = ma.masked_where(topo > 0., topo)
        all_masked = (ma.count(neg_topo) == 0)
        if not all_masked:
            pylab.pcolormesh(x, y, neg_topo, cmap=neg_cmap)
            pylab.clim([cmin, 0])
            if addcolorbar:
                pylab.colorbar()

        pos_topo = ma.masked_where(topo < 0., topo)
        all_masked = (ma.count(pos_topo) == 0)
        if not all_masked:
            pylab.pcolormesh(x, y, pos_topo, cmap=pos_cmap)
            pylab.clim([0, cmax])
            if addcolorbar:
                pylab.colorbar()

    pylab.axis('scaled')

    if addcontour:
        pylab.contour(x, y, topo, levels=contour_levels, colors='k')

    patchedges_show = True
    if patchedges_show:
        pylab.plot([x[0], x[-1]], [y[0], y[0]], 'k')
        pylab.plot([x[0], x[-1]], [y[-1], y[-1]], 'k')
        pylab.plot([x[0], x[0]], [y[0], y[-1]], 'k')
        pylab.plot([x[-1], x[-1]], [y[0], y[-1]], 'k')

    if print_fname:
        fname2 = os.path.splitext(fname)[0]
        pylab.text(xllcorner + cellsize, yllcorner + cellsize, fname2,
                   color='m')

    topodata = ClawData()
    topodata.x = x
    topodata.y = y
    topodata.topo = topo

    return topodata
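# --- hedged usage sketch (assumption): plot_topo_file() only reads attributes
# --- off the topoplotdata argument, so any simple container with the expected
# --- fields works for a quick test. 'etopo.tt3' is a hypothetical topotype-3
# --- file name, and the module-level names the function relies on (colormaps,
# --- topotools, Normalize, ma) are assumed to be imported alongside it.
class TopoPlotData:
    def __init__(self, fname):
        self.fname = fname
        self.topotype = 3
        self.climits = None
        self.cmin = -100.
        self.cmax = 100.
        self.figno = 200
        self.addcolorbar = True
        self.addcontour = False
        self.contour_levels = [0.]
        self.xlimits = None
        self.ylimits = None
        self.coarsen = 1
        self.imshow = True
        self.gridedges_show = False
        self.neg_cmap = None
        self.pos_cmap = None
        self.cmap = None
        self.print_fname = True

topodata = plot_topo_file(TopoPlotData('etopo.tt3'))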
def fit_exp(o_xdata, o_ydata, o_yerr):
    import scipy, pylab
    a = scipy.array(o_ydata)
    a, b, varp = pylab.hist(a, bins=scipy.arange(-2, 2, 0.05))
    pylab.xlabel("shear")
    pylab.ylabel("Number of Galaxies")
    #pylab.show()

    o_xdata = scipy.array(o_xdata)
    o_ydata = scipy.array(o_ydata)
    o_yerr = scipy.array(o_yerr)
    print o_yerr

    #o_xdata = o_xdata[abs(o_ydata) > 0.05]
    #o_yerr = o_yerr[abs(o_ydata) > 0.05]
    #o_ydata = o_ydata[abs(o_ydata) > 0.05]

    both = 0
    As = []
    for z in range(25):
        xdata = []
        ydata = []
        yerr = []
        for i in range(len(o_xdata)):
            rand = int(random.random() * len(o_xdata)) - 1
            #print rand, len(o_xdata)
            xdata.append(o_xdata[rand])
            ydata.append(o_ydata[rand])
            yerr.append(o_yerr[rand])

        xdata = scipy.array(xdata)
        ydata = scipy.array(ydata)

        ##########
        # Fitting the data -- Least Squares Method
        ##########
        # Power-law fitting is best done by first converting
        # to a linear equation and then fitting to a straight line.
        #
        #  y = a * x^b
        #  log(y) = log(a) + b*log(x)
        #
        powerlaw = lambda x, amp, index: amp * (x**index)

        #print xdata
        #print ydata
        ydata = ydata / (scipy.ones(len(ydata)) + abs(ydata))

        # define our (line) fitting function
        #fitfunc = lambda p, x: p[0]*x**p[1]
        if both:
            fitfunc = lambda p, x: p[0] * x**p[1]
            errfunc = lambda p, x, y, err: (y - fitfunc(p, x)) / err
            pinit = [1., -1.]
        else:
            fitfunc = lambda p, x: p[0] * x**-1.
            errfunc = lambda p, x, y, err: (y - fitfunc(p, x)) / err
            pinit = [1.]  #, -1.0]

        #ydataerr = scipy.ones(len(ydata)) * 0.3
        out = scipy.optimize.leastsq(errfunc, pinit,
                                     args=(scipy.array(xdata),
                                           scipy.array(ydata),
                                           scipy.array(yerr)),
                                     full_output=1)

        if both:
            pfinal = out[0]
            covar = out[1]
            print pfinal
            print covar
            index = pfinal[1]
            amp = pfinal[0]
            ampErr = covar[0][0]
            indexErr = covar[1][1]
        else:
            pfinal = out[0]
            covar = out[1]
            print pfinal
            print covar
            index = -1.  #pfinal[1]
            amp = pfinal  #[0]
            ampErr = covar[0][0]**0.5
            indexErr = 0  #covar

        #indexErr = scipy.sqrt( covar[0][0] )
        #ampErr = scipy.sqrt( covar[1][1] ) * amp

        ##########
        # Plotting data
        ##########
        import pylab
        pylab.clf()
        #pylab.subplot(2, 1, 1)
        pylab.errorbar(xdata, ydata, yerr=yerr, fmt='k.')  # Data

        from copy import copy
        xdataline = copy(xdata)
        xdataline.sort()

        pylab.plot(xdataline, powerlaw(xdataline, amp, index))  # Fit
        pylab.text(5, 6.5, 'Ampli = %5.2f +/- %5.2f' % (amp, ampErr))
        pylab.text(5, 5.5, 'Index = %5.2f +/- %5.2f' % (index, indexErr))
        pylab.title('Best Fit Power Law')
        pylab.xlabel('X')
        pylab.ylabel('Y')

        #pylab.subplot(2, 1, 2)
        #pylab.loglog(xdataline, powerlaw(xdataline, amp, index))
        #pylab.errorbar(xdata, ydata, yerr=ydataerr, fmt='k.')  # Data
        #pylab.xlabel('X (log scale)')
        #pylab.ylabel('Y (log scale)')
        #pylab.xlim(1.0, 11)

        print z
        if both:
            pylab.show()
        #pylab.show()
        pylab.clf()
        As.append(amp)

    As = scipy.array(As)
    return As.mean(), As.std()
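# --- hedged usage sketch (assumption): fit_exp() bootstraps an amplitude fit of
# --- y ~ amp / x from (x, y, yerr) samples and returns (mean, std) over the
# --- resamples. Synthetic data along 2/x just illustrates the calling convention.
import numpy
import random

x_true = numpy.linspace(1., 10., 200)
y_true = 2.0 / x_true
y_obs = y_true + numpy.random.normal(0., 0.05, size=x_true.size)
y_err = numpy.ones_like(x_true) * 0.05

amp_mean, amp_std = fit_exp(list(x_true), list(y_obs), list(y_err))
print amp_mean, amp_std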
while True:
    print 'Turn', n, 'Prepare to engage.'
    bombX = float(raw_input("Select x-coordinate of shell:"))
    bombY = float(raw_input("Select y-coordinate of shell:"))
    n = n + 1
    flyerOne.advance()
    flyerOne.makePathX()
    flyerOne.makePathY()
    pylab.plot(flyerOne.pathX, flyerOne.pathY, 'ro')
    pylab.plot(bombX, bombY, 'bo')
    # inform player where his bomb is
    pylab.text(5, 10, 'Sensor position of bomb at' + str(bombX) +
               ',' + str(bombY), size='x-large')
    pylab.show()
    pylab.ylim(0, 100)  # the gameplay boundary
    pylab.xlim(0, 100)
    if (bombX - float(flyerOne.positionList[0]))**2 + \
       (bombY - float(flyerOne.positionList[1]))**2 < frag_range:
        pylab.text(40, 40, 'HOSTILE TARGET DESTROYED', size='x-large')
        print "You win! threat neutralised"
        break
    if float(flyerOne.positionList[0]) > 100 or \
       float(flyerOne.positionList[1]) > 100:
        print "You are dead! threat bypassed your defenses"
        break  # Game over for n00bs
    if n > 1000:
        break  # Safety fuse during development process
              widths=d_widths, gap=0.005, scale=False)

for i in range(len(A2)):
    Ai = [x for x in A2[i] if x > 0]
    y = [x / 2.0 for x in Ai]
    for j in range(len(Ai)):
        if j > 0:
            yy = y[j] + np.sum(Ai[0:j])
        else:
            yy = y[j]
        if int(Ai[j]) > 10:
            pl.text(0.5 * i - 0.02, yy - 1.2, str(Ai[j]),
                    fontsize=12, zorder=10)

ax1.axes.get_xaxis().set_visible(False)
ax1.set_title('n = 5, number of graphs per u = 100',
              multialignment='center')
ax1.set_ylabel('u = 2', rotation=90)

#graph subplot for u = 3
A3 = makeA(2)
mycolorlistu3 = [
    (0.9769448735268946, 0.6468696110452877, 0.2151452804329661),
    (0.37645505989354233, 0.6875228836084111, 0.30496111115768654),
    (0.6151274326753975, 0.496189476149738, 0.15244053646953548),
    (0.1562085876188265, 0.44786703170340336, 0.9887241674046707),
    (0.4210506028639077, 0.2200011667972023, 0.37841949185273394),