def getsamples(era,channel="",tag="",dtype=[],filter=[],veto=[],moddict={},verb=0):
  """Help function to get samples from a sample list and filter if needed."""
  import TauFW.PicoProducer.tools.config as GLOB
  CONFIG   = GLOB.getconfig(verb=verb)
  filters  = filter if not filter or isinstance(filter,list) else [filter]
  vetoes   = veto   if not veto   or isinstance(veto,list)   else [veto]
  dtypes   = dtype  if not dtype  or isinstance(dtype,list)  else [dtype]
  sampfile = ensurefile("samples",repkey(CONFIG.eras[era],ERA=era,CHANNEL=channel,TAG=tag))
  samppath = sampfile.replace('.py','').replace('/','.')
  if samppath not in moddict:
    moddict[samppath] = importlib.import_module(samppath) # save time by loading each module only once
  if not hasattr(moddict[samppath],'samples'):
    LOG.throw(IOError,"Module '%s' must have a list of Sample objects called 'samples'!"%(samppath))
  samplelist = moddict[samppath].samples
  samples    = [ ]
  sampledict = { } # ensure unique sample names
  LOG.verb("getsamples: samplelist=%r"%(samplelist),verb,3)
  for sample in samplelist:
    if filters and not sample.match(filters,verb): continue
    if vetoes and sample.match(vetoes,verb): continue
    if dtypes and sample.dtype not in dtypes: continue
    if channel and sample.channels and not any(fnmatch(channel,c) for c in sample.channels): continue
    if sample.name in sampledict:
      LOG.throw(IOError,"Sample short names should be unique. Found two samples '%s'!\n\t%s\n\t%s"%(
                sample.name,','.join(sampledict[sample.name].paths),','.join(sample.paths)))
    if 'skim' in channel and sample.dosplit: # split samples with multiple DAS dataset paths, and submit as separate jobs
      for subsample in sample.split():
        samples.append(subsample) # keep correspondence of one sample to one DAS dataset path
    else:
      samples.append(sample)
    sampledict[sample.name] = sample
  return samples
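The `filter`, `veto`, and `dtype` arguments above accept either a single value or a list, normalized with the `x if not x or isinstance(x,list) else [x]` idiom. A minimal, self-contained sketch of that idiom (the helper name `aslist` is illustrative, not part of TauFW):

```python
def aslist(arg):
    """Normalize an argument so callers may pass a single value or a list.
    Falsy values (None, '', []) pass through unchanged, lists pass through,
    and any other single value is wrapped in a one-element list."""
    return arg if not arg or isinstance(arg, list) else [arg]

print(aslist("DY*"))          # single pattern wrapped in a list
print(aslist(["DY*", "W*"]))  # list passed through unchanged
print(aslist([]))             # empty list stays empty
```

This lets the filtering loop treat every argument uniformly as a list without burdening the caller.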
                    dest='verbosity', type=int, nargs='?', const=1, default=0, action='store')
args = parser.parse_args()

# SETTING
era      = args.era               # e.g. '2017', 'UL2017', ...
year     = getyear(era)           # integer year, e.g. 2017
modname  = args.module            # main module to run
channel  = args.channel           # channel
if channel:
  import TauFW.PicoProducer.tools.config as GLOB
  CONFIG = GLOB.getconfig(verb=0)
  if not modname:
    assert channel in CONFIG.channels, "Did not find channel '%s' in configuration. Available channels: %s"%(
      channel,CONFIG.channels)
    modname = CONFIG.channels[args.channel]
else:
  if not modname:
    modname = "ModuleMuTauSimple"
  channel = modname
dtype    = args.dtype             # data type ('data', 'mc', 'embed')
outdir   = ensuredir(args.outdir) # directory to create output
copydir  = args.copydir           # directory to copy output to at end
firstevt = args.firstevt          # index of first event to run
maxevts  = args.maxevts           # maximum number of events to run
nfiles   = 1 if maxevts>0 else -1 # maximum number of files to run
tag      = args.tag               # postfix tag of job output file
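The verbosity option parsed above relies on argparse's `nargs='?'`/`const` combination: a bare flag yields `const` (1), an explicit value like `-v 2` yields that value, and omitting the flag yields `default` (0). A self-contained sketch of the pattern (option names are illustrative):

```python
import argparse

# Sketch of the verbosity-flag pattern: `-v` alone means level 1,
# `-v 2` means level 2, and no flag at all means level 0.
parser = argparse.ArgumentParser()
parser.add_argument('-v', '--verbose', dest='verbosity', type=int,
                    nargs='?', const=1, default=0, action='store')

print(parser.parse_args([]).verbosity)        # flag omitted -> default (0)
print(parser.parse_args(['-v']).verbosity)    # bare flag    -> const (1)
print(parser.parse_args(['-v', '2']).verbosity)  # explicit value -> 2
```

The `type=int` conversion applies only when a value is actually supplied; `const` and `default` are used verbatim.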
def main():
  eras      = args.eras
  periods   = cleanEras(args.periods)
  channel   = args.channel
  types     = args.types
  verbosity = args.verbosity
  minbiases = [69.2] if periods else [69.2, 80.0, 69.2*1.046, 69.2*0.954]

  for era in args.eras:
    year       = getyear(era)
    mcfilename = "MC_PileUp_%s.root"%(era)
    jsondir    = os.path.join(datadir,'json',str(year))
    pileup     = os.path.join(jsondir,"pileup_latest.txt")
    CMSStyle.setCMSEra(year)

    if era=='2016':
      # https://twiki.cern.ch/twiki/bin/viewauth/CMS/PdmV2017Analysis
      # /afs/cern.ch/cms/CAF/CMSCOMM/COMM_DQM/certification/Collisions16/13TeV/ReReco/Final/Cert_271036-284044_13TeV_23Sep2016ReReco_Collisions16_JSON.txt
      # /afs/cern.ch/cms/CAF/CMSCOMM/COMM_DQM/certification/Collisions16/13TeV/Final/Cert_271036-284044_13TeV_PromptReco_Collisions16_JSON.txt
      # /afs/cern.ch/cms/CAF/CMSCOMM/COMM_DQM/certification/Collisions16/13TeV/PileUp/pileup_latest.txt
      JSON     = os.path.join(jsondir,"Cert_271036-284044_13TeV_ReReco_07Aug2017_Collisions16_JSON.txt")
      datasets = {
        'B': (272007,275376), 'C': (275657,276283), 'D': (276315,276811), 'E': (276831,277420),
        'F': (277772,278808), 'G': (278820,280385), 'H': (280919,284044),
      }
      campaign = "Moriond17"
      samples  = [
        ( 'TT', "TT",                   ),
        ( 'DY', "DYJetsToLL_M-10to50",  ),
        ( 'DY', "DYJetsToLL_M-50",      ),
        ( 'DY', "DY1JetsToLL_M-50",     ),
        ( 'DY', "DY2JetsToLL_M-50",     ),
        ( 'DY', "DY3JetsToLL_M-50",     ),
        ( 'WJ', "WJetsToLNu",           ),
        ( 'WJ', "W1JetsToLNu",          ),
        ( 'WJ', "W2JetsToLNu",          ),
        ( 'WJ', "W3JetsToLNu",          ),
        ( 'WJ', "W4JetsToLNu",          ),
        ( 'ST', "ST_tW_top",            ),
        ( 'ST', "ST_tW_antitop",        ),
        ( 'ST', "ST_t-channel_top",     ),
        ( 'ST', "ST_t-channel_antitop", ),
        #( 'ST', "ST_s-channel",        ),
        ( 'VV', "WW",                   ),
        ( 'VV', "WZ",                   ),
        ( 'VV', "ZZ",                   ),
      ]
    elif '2017' in era:
      # https://twiki.cern.ch/twiki/bin/viewauth/CMS/PdmV2017Analysis
      # /afs/cern.ch/cms/CAF/CMSCOMM/COMM_DQM/certification/Collisions17/13TeV/PileUp/pileup_latest.txt
      JSON     = os.path.join(jsondir,"Cert_294927-306462_13TeV_PromptReco_Collisions17_JSON.txt")
      datasets = {
        'B': (297020,299329), 'C': (299337,302029), 'D': (302030,303434),
        'E': (303435,304826), 'F': (304911,306462),
      }
      samples_bug = [ ]
      samples_fix = [ ]
      if 'UL' in era:
        campaign = "Summer19"
        samples_fix = [
          #( 'DY', "DYJetsToLL_M-10to50", ),
          ( 'DY', "DYJetsToLL_M-50",      ),
          ( 'DY', "DY1JetsToLL_M-50",     ),
          ( 'DY', "DY2JetsToLL_M-50",     ),
          ( 'DY', "DY3JetsToLL_M-50",     ),
          ( 'DY', "DY4JetsToLL_M-50",     ),
          #( 'TT', "TTTo2L2Nu",           ),
          ( 'TT', "TTToHadronic",         ),
          #( 'TT', "TTToSemiLeptonic",    ),
          ( 'WJ', "WJetsToLNu",           ),
          ( 'WJ', "W1JetsToLNu",          ),
          ( 'WJ', "W2JetsToLNu",          ),
          ( 'WJ', "W3JetsToLNu",          ),
          ( 'WJ', "W4JetsToLNu",          ),
          ( 'ST', "ST_tW_top",            ),
          ( 'ST', "ST_tW_antitop",        ),
          ( 'ST', "ST_t-channel_top",     ),
          ( 'ST', "ST_t-channel_antitop", ),
          #( 'ST', "ST_s-channel",        ),
          #( 'VV', "WW",                  ),
          #( 'VV', "WZ",                  ),
          #( 'VV', "ZZ",                  ),
        ]
      else:
        campaign = "Winter17_V2"
        samples_bug = [
          ( 'DY', "DYJetsToLL_M-50", ),
          ( 'WJ', "W3JetsToLNu",     ),
          ( 'VV', "WZ",              ),
        ]
        samples_fix = [
          ( 'DY', "DYJetsToLL_M-10to50",  ),
          ( 'DY', "DY1JetsToLL_M-50",     ),
          ( 'DY', "DY2JetsToLL_M-50",     ),
          ( 'DY', "DY3JetsToLL_M-50",     ),
          ( 'DY', "DY4JetsToLL_M-50",     ),
          ( 'TT', "TTTo2L2Nu",            ),
          ( 'TT', "TTToHadronic",         ),
          ( 'TT', "TTToSemiLeptonic",     ),
          ( 'WJ', "WJetsToLNu",           ),
          ( 'WJ', "W1JetsToLNu",          ),
          ( 'WJ', "W2JetsToLNu",          ),
          ( 'WJ', "W4JetsToLNu",          ),
          ( 'ST', "ST_tW_top",            ),
          ( 'ST', "ST_tW_antitop",        ),
          ( 'ST', "ST_t-channel_top",     ),
          ( 'ST', "ST_t-channel_antitop", ),
          #( 'ST', "ST_s-channel",        ),
          ( 'VV', "WW",                   ),
          ( 'VV', "ZZ",                   ),
        ]
      samples = samples_bug + samples_fix
    else:
      # https://twiki.cern.ch/twiki/bin/viewauth/CMS/PdmV2018Analysis
      # /afs/cern.ch/cms/CAF/CMSCOMM/COMM_DQM/certification/Collisions18/13TeV/PromptReco
      # /afs/cern.ch/cms/CAF/CMSCOMM/COMM_DQM/certification/Collisions18/13TeV/PileUp/pileup_latest.txt
      JSON     = os.path.join(jsondir,"Cert_314472-325175_13TeV_PromptReco_Collisions18_JSON.txt")
      datasets = {
        'A': (315252,316995), 'B': (317080,319310), 'C': (319337,320065), 'D': (320673,325175),
      }
      campaign = "Autumn18"
      samples  = [
        ( 'TT', "TTTo2L2Nu",            ),
        ( 'TT', "TTToHadronic",         ),
        ( 'TT', "TTToSemiLeptonic",     ),
        ( 'DY', "DYJetsToLL_M-10to50",  ),
        ( 'DY', "DYJetsToLL_M-50",      ),
        ( 'DY', "DY1JetsToLL_M-50",     ),
        ( 'DY', "DY2JetsToLL_M-50",     ),
        ( 'DY', "DY3JetsToLL_M-50",     ),
        ( 'DY', "DY4JetsToLL_M-50",     ),
        #( 'WJ', "WJetsToLNu",          ),
        ( 'WJ', "W1JetsToLNu",          ),
        ( 'WJ', "W2JetsToLNu",          ),
        ( 'WJ', "W3JetsToLNu",          ),
        ( 'WJ', "W4JetsToLNu",          ),
        ( 'ST', "ST_tW_top",            ),
        ( 'ST', "ST_tW_antitop",        ),
        ( 'ST', "ST_t-channel_top",     ),
        ( 'ST', "ST_t-channel_antitop", ),
        #( 'ST', "ST_s-channel",        ),
        ( 'VV', "WW",                   ),
        ( 'VV', "WZ",                   ),
        ( 'VV', "ZZ",                   ),
      ]

    # SAMPLES FILENAMES
    fname_ = "$PICODIR/$SAMPLE_$CHANNEL.root" # file name pattern
    if '$PICODIR' in fname_:
      import TauFW.PicoProducer.tools.config as GLOB
      CONFIG = GLOB.getconfig(verb=verbosity)
      fname_ = repkey(fname_,PICODIR=CONFIG['picodir'])
    for i, (group,sample) in enumerate(samples):
      # substitute from the unexpanded pattern each iteration, so each sample gets its own file name
      fname = repkey(fname_,ERA=era,GROUP=group,SAMPLE=sample,CHANNEL=channel)
      samples[i] = (sample,fname)
    if verbosity>=1:
      print ">>> samples = %r"%(samples)

    # JSON
    jsons = { }
    if periods:
      outdir = ensuredir("json")
      for period in periods:
        start, end = getPeriodRunNumbers(period,datasets)
        erarun  = "Run%s%s"%(era,period) # e.g. 'Run2017B'; era is a string, so use %s
        jsonout = "json/"+re.sub(r"\d{6}-\d{6}",erarun,JSON.split('/')[-1])
        filterJSONByRunNumberRange(JSON,jsonout,start,end,verb=verbosity)
        jsons[erarun] = jsonout
    else:
      jsons[era] = JSON

    # DATA
    datahists = { period: [ ] for period in jsons }
    if 'data' in types:
      for period, json in jsons.iteritems():
        for minbias in minbiases:
          filename = "Data_PileUp_%s_%s.root"%(period,str(minbias).replace('.','p'))
          datahist = getDataProfile(filename,json,pileup,100,era,minbias)
          datahists[period].append((minbias,datahist))
    elif args.plot:
      for era in jsons:
        for minbias in minbiases:
          filename = "Data_PileUp_%s_%s.root"%(era,str(minbias).replace('.','p'))
          file, hist = gethist(filename,'pileup',retfile=True)
          if not file or not hist: continue
          hist.SetDirectory(0)
          file.Close()
          datahists[era].append((minbias,hist))

    # MC
    if 'mc' in types:
      mcfilename = "MC_PileUp_%s.root"%(era)
      #mcfilename = "MC_PileUp_%s_%s.root"%(era,campaign)
      getMCProfile(mcfilename,samples,channel,era)
      if args.plot:
        mchist = compareMCProfiles(samples,channel,era)
        for era in jsons:
          for minbias, datahist in datahists[era]:
            compareDataMCProfiles(datahist,mchist,era,minbias)
        deletehist(mchist) # clean memory
      if era=='2017': # also check new/old pmx separately
        mcfilename_bug = mcfilename.replace(".root","_old_pmx.root")
        mcfilename_fix = mcfilename.replace(".root","_new_pmx.root")
        getMCProfile(mcfilename_bug,samples_bug,channel,era)
        getMCProfile(mcfilename_fix,samples_fix,channel,era)
        if args.plot:
          mchist_bug = compareMCProfiles(samples_bug,channel,era,tag="old_pmx")
          mchist_fix = compareMCProfiles(samples_fix,channel,era,tag="new_pmx")
          for era in jsons:
            for minbias, datahist in datahists[era]:
              compareDataMCProfiles(datahist,mchist_bug,era,minbias,tag="old_pmx")
              compareDataMCProfiles(datahist,mchist_fix,era,minbias,tag="new_pmx")

    # FLAT
    if 'flat' in types:
      filename  = "MC_PileUp_%s_FlatPU0to75.root"%(era) # era is a string, so use %s
      hist_flat = getFlatProfile(filename,75)
      for era in jsons:
        for minbias, datahist in datahists[era]:
          compareDataMCProfiles(datahist,hist_flat,era,minbias,tag="FlatPU0to75",rmin=0.0,rmax=3.1)
def getsampleset(datasample, expsamples, sigsamples=[ ], **kwargs):
  """Create sample set from a table of data and MC samples."""
  channel    = kwargs.get('channel',    ""  )
  era        = kwargs.get('era',        ""  )
  fpattern   = kwargs.get('file',       None) # file name pattern, e.g. $PICODIR/$SAMPLE_$CHANNEL$TAG.root
  weight     = kwargs.pop('weight',     ""  ) # common weight for MC samples
  dataweight = kwargs.pop('dataweight', ""  ) # weight for data samples
  url        = kwargs.pop('url',        ""  ) # XRootD url
  tag        = kwargs.pop('tag',        ""  ) # extra tag for file name

  if not fpattern:
    fpattern = "$PICODIR/$SAMPLE_$CHANNEL$TAG.root"
  if '$PICODIR' in fpattern:
    import TauFW.PicoProducer.tools.config as GLOB
    CONFIG   = GLOB.getconfig(verb=0)
    picodir  = CONFIG['picodir']
    fpattern = repkey(fpattern, PICODIR=picodir)
  if url:
    fpattern = "%s/%s"%(fpattern, url)
  LOG.verb("getsampleset: fpattern=%r"%(fpattern), level=1)

  # MC (EXPECTED)
  for i, info in enumerate(expsamples[:]):
    expkwargs = kwargs.copy()
    expkwargs['weight'] = weight
    if len(info)==4:
      group, name, title, xsec = info
    elif len(info)==5 and isinstance(info[4], dict):
      group, name, title, xsec, newkwargs = info
      expkwargs.update(newkwargs)
    else:
      LOG.throw(IOError, "Did not recognize mc row %s"%(info))
    fname = repkey(fpattern, ERA=era, GROUP=group, SAMPLE=name, CHANNEL=channel, TAG=tag)
    #print fname
    sample = MC(name, title, fname, xsec, **expkwargs)
    expsamples[i] = sample

  # DATA (OBSERVED)
  title = 'Observed'
  datakwargs = kwargs.copy()
  datakwargs['weight'] = dataweight
  if isinstance(datasample, dict) and channel:
    datasample = datasample[channel]
  if len(datasample)==2:
    group, name = datasample
  elif len(datasample)==3:
    group, name = datasample[:2]
    if isinstance(datasample[2], dict): # dictionary
      datakwargs.update(datasample[2])
    else: # string
      title = datasample[2]
  elif len(datasample)==4 and isinstance(datasample[3], dict):
    group, name, title, newkwargs = datasample
    datakwargs.update(newkwargs)
  else:
    LOG.throw(IOError, "Did not recognize data row %s"%(datasample))
  fpattern = repkey(fpattern, ERA=era, GROUP=group, SAMPLE=name, CHANNEL=channel, TAG=tag)
  fnames   = glob.glob(fpattern)
  #print fnames
  if len(fnames)==1:
    datasample = Data(name, title, fnames[0], **datakwargs) # pass data weight etc. in the single-file case too
  elif len(fnames)>1:
    namerexp = re.compile(name.replace('?', '.').replace('*', '.*'))
    name     = name.replace('?', '').replace('*', '')
    datasample = MergedSample(name, 'Observed', data=True)
    for fname in fnames:
      setname = namerexp.findall(fname)[0]
      #print setname
      datasample.add(Data(setname, 'Observed', fname, **datakwargs))
  else:
    LOG.throw(IOError, "Did not find data file %r"%(fpattern))

  # SAMPLE SET
  sampleset = SampleSet(datasample, expsamples, sigsamples, **kwargs)
  return sampleset
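The multi-file data branch above converts the glob wildcards in the sample name into a regular expression (`'?'` → `'.'`, `'*'` → `'.*'`) and uses `findall` to recover each dataset's concrete name (e.g. its run era) from the resolved file path. A self-contained sketch of that trick (the helper name is illustrative, not part of the module):

```python
import re

def extract_setnames(pattern, fnames):
    """Turn a glob-style sample name into a regex and pull the matching
    substring (e.g. 'SingleMuon_Run2017B') back out of each file path.
    With no capture groups, re.findall returns the full matched text."""
    rexp = re.compile(pattern.replace('?', '.').replace('*', '.*'))
    return [rexp.findall(fname)[0] for fname in fnames]

print(extract_setnames("SingleMuon_Run2017?",
                       ["/pico/SingleMuon_Run2017B_mutau.root",
                        "/pico/SingleMuon_Run2017C_mutau.root"]))
```

This is how one wildcard data row can expand into a `MergedSample` whose components keep their individual dataset names.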
def main():
  eras      = args.eras
  periods   = cleanPeriods(args.periods)
  channel   = args.channel
  types     = args.types
  verbosity = args.verbosity
  minbiases = [ 69.2 ] if periods else [ 69.2, 69.2*1.046, 69.2*0.954, 80.0 ]
  fname_    = "$PICODIR/$SAMPLE_$CHANNEL.root" # sample file name pattern
  if 'mc' in types and '$PICODIR' in fname_:
    import TauFW.PicoProducer.tools.config as GLOB
    CONFIG = GLOB.getconfig(verb=verbosity)
    fname_ = repkey(fname_,PICODIR=CONFIG['picodir'])

  for era in args.eras:
    year       = getyear(era)
    mcfilename = "MC_PileUp_%s.root"%(era)
    jsondir    = os.path.join(datadir,'json',str(year))
    pileup     = os.path.join(jsondir,"pileup_latest.txt")
    jname      = getJSON(era)
    CMSStyle.setCMSEra(era)
    samples_bug = [ ] # buggy samples in (pre-UL) 2017 with "old pmx" library
    samples_fix = [ ] # fixed samples in (pre-UL) 2017 with "new pmx" library
    samples = [ # default set of samples
      ( 'DY', "DYJetsToMuTauh_M-50"  ),
      ( 'DY', "DYJetsToLL_M-50"      ),
      ( 'DY', "DY4JetsToLL_M-50"     ),
      ( 'DY', "DY3JetsToLL_M-50"     ),
      ( 'DY', "DY2JetsToLL_M-50"     ),
      ( 'DY', "DY1JetsToLL_M-50"     ),
      ( 'WJ', "WJetsToLNu"           ),
      ( 'WJ', "W4JetsToLNu"          ),
      ( 'WJ', "W3JetsToLNu"          ),
      ( 'WJ', "W2JetsToLNu"          ),
      ( 'WJ', "W1JetsToLNu"          ),
      ( 'TT', "TTToHadronic"         ),
      ( 'TT', "TTTo2L2Nu"            ),
      ( 'TT', "TTToSemiLeptonic"     ),
      ( 'ST', "ST_tW_top"            ),
      ( 'ST', "ST_tW_antitop"        ),
      ( 'ST', "ST_t-channel_top"     ),
      ( 'ST', "ST_t-channel_antitop" ),
      ( 'VV', "WW"                   ),
      ( 'VV', "WZ"                   ),
      ( 'VV', "ZZ"                   ),
    ]
    if '2016' in era: # match '2016' and 'UL2016*'; era=='2016' would make the UL checks below dead code
      campaign = "Moriond17"
      if 'UL' in era and 'preVFP' in era:
        campaign = "Summer19"
      elif 'UL' in era:
        campaign = "Summer19"
      else: # pre-UL 2016
        samples = [
          ( 'TT', "TT",                   ),
          ( 'DY', "DYJetsToLL_M-10to50",  ),
          ( 'DY', "DYJetsToLL_M-50",      ),
          ( 'DY', "DY1JetsToLL_M-50",     ),
          ( 'DY', "DY2JetsToLL_M-50",     ),
          ( 'DY', "DY3JetsToLL_M-50",     ),
          ( 'WJ', "WJetsToLNu",           ),
          ( 'WJ', "W1JetsToLNu",          ),
          ( 'WJ', "W2JetsToLNu",          ),
          ( 'WJ', "W3JetsToLNu",          ),
          ( 'WJ', "W4JetsToLNu",          ),
          ( 'ST', "ST_tW_top",            ),
          ( 'ST', "ST_tW_antitop",        ),
          ( 'ST', "ST_t-channel_top",     ),
          ( 'ST', "ST_t-channel_antitop", ),
          #( 'ST', "ST_s-channel",        ),
          ( 'VV', "WW",                   ),
          ( 'VV', "WZ",                   ),
          ( 'VV', "ZZ",                   ),
        ]
    elif '2017' in era:
      if 'UL' in era:
        campaign = "Summer19"
      else:
        campaign = "Winter17_V2"
        samples_bug = [ # buggy samples in (pre-UL) 2017
          ( 'DY', "DYJetsToLL_M-50", ),
          ( 'WJ', "W3JetsToLNu",     ),
          ( 'VV', "WZ",              ),
        ]
        samples_fix = [ # fixed samples in (pre-UL) 2017
          ( 'DY', "DYJetsToLL_M-10to50",  ),
          ( 'DY', "DY1JetsToLL_M-50",     ),
          ( 'DY', "DY2JetsToLL_M-50",     ),
          ( 'DY', "DY3JetsToLL_M-50",     ),
          ( 'DY', "DY4JetsToLL_M-50",     ),
          ( 'TT', "TTTo2L2Nu",            ),
          ( 'TT', "TTToHadronic",         ),
          ( 'TT', "TTToSemiLeptonic",     ),
          ( 'WJ', "WJetsToLNu",           ),
          ( 'WJ', "W1JetsToLNu",          ),
          ( 'WJ', "W2JetsToLNu",          ),
          ( 'WJ', "W4JetsToLNu",          ),
          ( 'ST', "ST_tW_top",            ),
          ( 'ST', "ST_tW_antitop",        ),
          ( 'ST', "ST_t-channel_top",     ),
          ( 'ST', "ST_t-channel_antitop", ),
          #( 'ST', "ST_s-channel",        ),
          ( 'VV', "WW",                   ),
          ( 'VV', "ZZ",                   ),
        ]
        samples = samples_bug + samples_fix # keep default samples for UL2017
    else: # 2018
      if 'UL' in era:
        campaign = "Summer19"
      else:
        campaign = "Autumn18"
      samples = [
        ( 'TT', "TTTo2L2Nu",            ),
        ( 'TT', "TTToHadronic",         ),
        ( 'TT', "TTToSemiLeptonic",     ),
        ( 'DY', "DYJetsToLL_M-10to50",  ),
        ( 'DY', "DYJetsToLL_M-50",      ),
        ( 'DY', "DY1JetsToLL_M-50",     ),
        ( 'DY', "DY2JetsToLL_M-50",     ),
        ( 'DY', "DY3JetsToLL_M-50",     ),
        ( 'DY', "DY4JetsToLL_M-50",     ),
        #( 'WJ', "WJetsToLNu",          ),
        ( 'WJ', "W1JetsToLNu",          ),
        ( 'WJ', "W2JetsToLNu",          ),
        ( 'WJ', "W3JetsToLNu",          ),
        ( 'WJ', "W4JetsToLNu",          ),
        ( 'ST', "ST_tW_top",            ),
        ( 'ST', "ST_tW_antitop",        ),
        ( 'ST', "ST_t-channel_top",     ),
        ( 'ST', "ST_t-channel_antitop", ),
        #( 'ST', "ST_s-channel",        ),
        ( 'VV', "WW",                   ),
        ( 'VV', "WZ",                   ),
        ( 'VV', "ZZ",                   ),
      ]

    # SAMPLES FILENAMES
    samples_ = [ ]
    suberas  = [era+"_preVFP",era+"_postVFP"] if era=='UL2016' else [era]
    for subera in suberas:
      for i, (group,sample) in enumerate(samples):
        fname = repkey(fname_,ERA=subera,GROUP=group,SAMPLE=sample,CHANNEL=channel)
        samples_.append((sample,fname))
    samples = samples_ # replace sample list
    if verbosity>=1:
      print ">>> samples = %r"%(samples)

    # JSON
    jsons = { }
    if periods:
      for period in periods:
        jsonout = filterJSONByRunNumberRange(jname,era,period=period,outdir='json',verb=verbosity)
        erarun  = "Run%s%s"%(era,period) # e.g. 'RunUL2017B'; was undefined in this loop before
        jsons[erarun] = jsonout
    else:
      jsons[era] = jname

    # DATA
    datahists = { period: [ ] for period in jsons }
    if 'data' in types:
      for period, json in jsons.iteritems():
        for minbias in minbiases:
          filename = "Data_PileUp_%s_%s.root"%(period,str(minbias).replace('.','p'))
          datahist = getDataProfile(filename,json,pileup,100,era,minbias)
          datahists[period].append((minbias,datahist))
    elif args.plot: # do not create new data profiles, but just load existing ones
      for era in jsons:
        for minbias in minbiases:
          filename = "Data_PileUp_%s_%s.root"%(era,str(minbias).replace('.','p'))
          file, hist = gethist(filename,'pileup',retfile=True)
          if not file or not hist: continue
          hist.SetDirectory(0)
          file.Close()
          datahists[era].append((minbias,hist))

    # MC
    if 'mc' in types:
      assert samples, "main: Did not find any samples for %r..."%(era)
      mcfilename = "MC_PileUp_%s.root"%(era)
      #mcfilename = "MC_PileUp_%s_%s.root"%(era,campaign)
      getMCProfile(mcfilename,samples,channel,era)
      if args.plot:
        mchist = compareMCProfiles(samples,channel,era)
        for era in jsons:
          for minbias, datahist in datahists[era]:
            compareDataMCProfiles(datahist,mchist,era,minbias)
          compareDataMCProfiles(datahists[era],mchist,era,rmin=0.4,rmax=1.5,delete=True) # overlay all min. bias variations once per era
        deletehist(mchist) # clean memory
      if era=='2017': #and 'UL' not in era # buggy (pre-UL) 2017: also check new/old pmx separately
        mcfilename_bug = mcfilename.replace(".root","_old_pmx.root")
        mcfilename_fix = mcfilename.replace(".root","_new_pmx.root")
        getMCProfile(mcfilename_bug,samples_bug,channel,era)
        getMCProfile(mcfilename_fix,samples_fix,channel,era)
        if args.plot:
          mchist_bug = compareMCProfiles(samples_bug,channel,era,tag="old_pmx")
          mchist_fix = compareMCProfiles(samples_fix,channel,era,tag="new_pmx")
          for era in jsons:
            for minbias, datahist in datahists[era]:
              compareDataMCProfiles(datahist,mchist_bug,era,minbias,tag="old_pmx")
              compareDataMCProfiles(datahist,mchist_fix,era,minbias,tag="new_pmx")

    # FLAT
    if 'flat' in types:
      filename  = "MC_PileUp_%s_FlatPU0to75.root"%(era) # era is a string, so use %s
      hist_flat = getFlatProfile(filename,75)
      for era in jsons:
        for minbias, datahist in datahists[era]:
          compareDataMCProfiles(datahist,hist_flat,era,minbias,tag="FlatPU0to75",rmin=0.0,rmax=3.1)