def runQueeny(project, tmp=None):
    try: # Fixes 2l4g
        return _runQueeny(project, tmp=tmp)
    except:
        nTerror("Queeny failed as per below.")
        nTtracebackError()
        return True
def lenRecursive(obj, max_depth=5):
    """Count the number of values recursively.
    Walk thru any children elements that are also of type dict.
    {a: {b: None, c: None}} will give a length of 2
    """
    if not isinstance(obj, (list, tuple, dict)):
        nTerror("In lenRecursive the input was not a dict or list instance but was a %s" % str(obj))
        return None
    count = 0
    eList = obj
    if isinstance(obj, dict):
        eList = obj.values()
    for element in eList:
        if element == None:
            count += 1
            continue
        if isinstance(element, (list, tuple, dict)):
            new_depth = max_depth - 1
            if new_depth < 0:
                count += 1 # still count but do not go to infinity and beyond
                continue
            count += lenRecursive(element, new_depth)
            continue
        count += 1
    # end for
    return count
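A minimal, self-contained sketch of the same counting behaviour (the error-reporting is dropped here; this is an illustration, not the CING implementation):

```python
def len_recursive(obj, max_depth=5):
    """Count leaf values, descending into nested list/tuple/dict up to max_depth."""
    if not isinstance(obj, (list, tuple, dict)):
        return None  # mirrors lenRecursive returning None on bad input
    count = 0
    elements = obj.values() if isinstance(obj, dict) else obj
    for element in elements:
        if isinstance(element, (list, tuple, dict)):
            if max_depth - 1 < 0:
                count += 1  # count the container itself, but stop descending
            else:
                count += len_recursive(element, max_depth - 1)
        else:
            count += 1  # scalars, including None, count as one value
    return count
```

As in the docstring above, `{a: {b: None, c: None}}` counts as 2: only the two leaf values in the nested dict are counted, not the containers.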
def _findTalosOutputFiles(path, talosDefs):
    """
    Check for existence of the output files; return True on error
    """
    # pred.tab file; currently only one name encountered
    talosDefs.predFile = None
    for tFile in 'pred.tab'.split():
        pFile = path / tFile
        if pFile.exists():
            #print '>found>', pFile
            talosDefs.predFile = tFile # only store local part of name
            break
    #end for
    if talosDefs.predFile == None:
        nTerror("_findTalosOutputFiles: Failed to find pred.tab file")
        return True
    #end if
    # Multiple predSS names found
    talosDefs.predSSFile = None
    for tFile in 'predSS.tab pred.ss.tab'.split():
        pFile = path / tFile
        if pFile.exists():
            #print '>found>', pFile
            talosDefs.predSSFile = tFile # only store local part of name
            break
    #end for
    if talosDefs.predSSFile == None:
        nTerror("_findTalosOutputFiles: Failed to find predSS.tab or pred.ss.tab file")
        return True
    #end if
    return False
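The "try candidate file names in order, keep the first that exists" pattern above can be sketched standalone; this uses plain `os.path` rather than CING's path class (an assumption for portability), with the candidate names taken from the source:

```python
import os
import tempfile

def find_first_existing(path, candidates):
    """Return the first candidate file name that exists under path, else None."""
    for name in candidates:
        if os.path.exists(os.path.join(path, name)):
            return name  # only the local part of the name, as in the original
    return None

# demo: only the newer 'pred.ss.tab' spelling is present in this directory
tmpDir = tempfile.mkdtemp()
open(os.path.join(tmpDir, 'pred.ss.tab'), 'w').close()
predSSFile = find_first_existing(tmpDir, ['predSS.tab', 'pred.ss.tab'])
```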
def initPdb(self):
    fName = self.pdbEntry.get()
    if not os.path.exists(fName):
        nTerror('Error: file "%s" does not exist\n', fName)
        return # do not proceed with a missing file
    #end if
    self.project = cing.Project.open(self.nameEntry.get(), status='new')
    self.project.initPDB(pdbFile=fName, convention='PDB')
def _averageShiftx(project):
    """Average shiftx data array for each atom
    return True on error
    """
    # nTdebug('shiftx: doing averageShiftx')
    molecule = project.molecule
    if molecule is None:
        nTerror("_averageShiftx: no molecule defined")
        return True
    # end if
    for atm in molecule.allAtoms():
        # Set averages
        shiftx = project.validationData.getResult(atm, constants.SHIFTX_KEY)
        if shiftx is not None:
            av, sd, n = nTaverage(shiftx.data)
            if av is None:
                shiftx[ShiftxResult.AVERAGE] = NaN
                shiftx[ShiftxResult.SD] = NaN
                # LEGACY:
                atm.shiftx.av = NaN
                atm.shiftx.sd = NaN
            else:
                shiftx[ShiftxResult.AVERAGE] = av
                shiftx[ShiftxResult.SD] = sd
                # LEGACY:
                atm.shiftx.av = av
                atm.shiftx.sd = sd
def exportDef(self, stream=sys.stdout, convention=constants.INTERNAL):
    """export definitions to stream"""
    io.printf(stream, "\t#---------------------------------------------------------------\n")
    io.printf(stream, "\tDIHEDRAL %-8s\n", self.name)
    io.printf(stream, "\t#---------------------------------------------------------------\n")
    if convention == constants.INTERNAL:
        atms = self.atoms
    else:
        # convert atoms
        atms = []
        for resId, atmName in self.atoms:
            if resId != 0:
                nTwarning("DihedralDef.exportDef: %s topology (%d,%s) skipped translation", self, resId, atmName)
                atms.append((resId, atmName))
            elif not atmName in self.residueDef:
                nTerror("DihedralDef.exportDef: %s topology (%d,%s) not decoded", self, resId, atmName)
                atms.append((resId, atmName))
            else:
                atm = self.residueDef[atmName]
                atms.append((resId, atm.translate(convention)))
            # end if
        # end for
        # print 'atms', atms
    # end if
    io.printf(stream, "\t\t%s = %s\n", "atoms", repr(atms))
    for attr in ["karplus"]:
        io.printf(stream, "\t\t%s = %s\n", attr, repr(self[attr]))
    # end for
    io.printf(stream, "\tEND_DIHEDRAL\n")
def parseShiftx(project, tmp=None):
    """
    Parse the output generated by the shiftx program
    """
    if project is None:
        nTmessage("parseShiftx: No project defined")
        return True
    if project.molecule is None:
        nTmessage("parseShiftx: No molecule defined")
        return True
    defs = project.getStatusDict(constants.SHIFTX_KEY, **shiftxStatus())
    if not defs.completed:
        nTmessage("parseShiftx: No shiftx was run")
        return True
    path = project.validationPath(defs.directory)
    if not path:
        nTerror('parseShiftx: directory "%s" with shiftx data not found', path)
        return True
    _resetShiftx(project)
    # print '>>', defs, len(defs.chains)
    for chainId, fname in defs.chains:
        if _parseShiftxOutput(path / fname, project, chainId):
            return True
    # end for
    defs.parsed = True
    _calculatePseudoAtomShifts(project, len(defs.models))
    _averageShiftx(project)
    calcQshift(project)
    return False
def getArchiveIdFromDirectoryName(dirName):
    '''
    From input such as:
    Return a valid id such as:
    or None on error.
    '''
    nTdebug("In `getArchiveIdFromDirectoryName`, with %s as dirName" % dirName)
    if not dirName:
        nTerror("Failed to map dirName [%s] because baseName evaluates to False." % dirName)
        return None
    # end if
    baseName = None
    for baseTry in results_baseList:
        nTdebug("baseTry: %s" % baseTry)
        if baseTry in dirName:
            baseName = baseTry
            break
        # end if
    # end for
    # Guard the lookup before using baseName as a key, or an unmapped
    # baseName (including None) would raise a KeyError here.
    if not baseName in mapBase2Archive.keys():
        nTwarning("Failed to map dirName [%s] with baseName [%s] because baseName is an unenumerated baseName." % (dirName, baseName))
        return None
    # end if
    nTdebug("Returning %s as archiveID" % mapBase2Archive[baseName])
    return mapBase2Archive[baseName]
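The substring-based mapping can be shown with hypothetical lookup tables; the real `results_baseList` and `mapBase2Archive` live elsewhere in the CING code base, so the names and values below are illustrative only:

```python
# Hypothetical data, standing in for CING's real tables.
results_baseList = ['results_red', 'results_blue']
mapBase2Archive = {'results_red': 'RED_ID', 'results_blue': 'BLUE_ID'}

def archive_id_from_dir(dirName):
    """First enumerated base name found as a substring of dirName wins; None otherwise."""
    if not dirName:
        return None
    for baseTry in results_baseList:
        if baseTry in dirName:
            # .get() guards against a base name missing from the map
            return mapBase2Archive.get(baseTry)
    return None
```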
def showColumn(self, *cNames):
    """
    Show column(s) cNames
    """
    for c in cNames:
        if not c in self:
            nTerror('NmrPipeTable.showColumn: column "%s" not defined\n', c)
        else:
            self[c].hide = False
def hideColumn(self, *cNames):
    """
    Hide column(s) cNames
    """
    for c in cNames:
        if not c in self:
            nTerror('NmrPipeTable.hideColumn: column "%s" not defined\n', c)
        else:
            self[c].hide = True
def isValidAtomName(self, atmName, convention=constants.INTERNAL):
    """return True if atmName is valid for convention, False otherwise"""
    # print '>>', resName, atomName
    if not self.residueDict.has_key(convention):
        nTerror("ResidueDef.isValidAtomName: convention %s not defined within CING", convention)
        return False
    # end if
    return self.getAtomDefByName(atmName, convention=convention) is not None
def getId(self, id):
    """Return restraint instance with id
    Returns None on error
    """
    if not self._idDict.has_key(id):
        nTerror('ResonanceList.getId: invalid id (%d)', id)
        return None
    #end if
    return self._idDict[id]
def openOldProject(self):
    fName = self.projEntry.get()
    if not os.path.exists(fName):
        nTerror('Error: file "%s" does not exist\n', fName)
        return # do not try to open a missing project
    #end if
    if self.project:
        self.closeProject()
    # end if
    self.project = cing.Project.open(name=fName, status='old', verbose=False)
def restoreFromSML(rootPath, mDef, convention=constants.INTERNAL):
    """
    restore ResidueDefs from SML files in rootPath to a MolDef instance mDef
    """
    path = disk.Path(str(rootPath))
    if not path.exists():
        nTerror('restoreFromSML: path "%s" not found', path)
        return None
    # end if
    for rfile in path.glob("*.sml"):
        # nTdebug('restoreSML: restoring from "%s"', rfile)
        mDef.appendResidueDefFromSMLfile(rfile)
def open(self, path, status='old'):
    """
    Open a project and append
    return project; exits on error
    """
    project = Project.open(path, status=status)
    if not project:
        nTerror('Projects.open: aborting')
        sys.exit(1)
    self.append(project)
    return project
def writeFile(self, tabFile):
    """
    Write table to tabFile.
    Return True on error
    """
    try:
        fp = open(tabFile, 'w')
    except IOError:
        # open() raises on failure rather than returning None
        nTerror('NmrPipeTable.writeFile: error opening "%s"', tabFile)
        return True
    self.write(fp)
    fp.close()
    # nTdebug('==> Written nmrPipe table file "%s"', tabFile)
    return False
def getDiagonal(self):
    """
    Get the diagonal of a square NTlistOfLists
    return NTlist instance or None on error
    """
    if self.rowSize != self.colSize:
        nTerror('NTlistOflists.getDiagonal: unequal number of rows (%d) and columns (%d)', self.rowSize, self.colSize)
        return None
    result = NTlist()
    for i in range(self.rowSize):
        result.append(self[i][i])
    #end for
    return result
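The same diagonal extraction, sketched with a plain list-of-lists instead of the NTlistOfLists class (an illustration, not the CING implementation):

```python
def get_diagonal(rows):
    """Diagonal of a square list-of-lists; None when the matrix is not square."""
    if any(len(row) != len(rows) for row in rows):
        return None  # unequal number of rows and columns
    return [rows[i][i] for i in range(len(rows))]
```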
def test(projects, stream=sys.stdout):
    # A hack to get residue specific results
    selectedResidues = projects[0].molecule.setResiduesFromRanges('all')
    for res in selectedResidues:
        rmsds3 = calcPhiPsiRmsds(projects, ranges=[res.resNum])
        printRmsds('Relative Phi-Psi ' + res.name, rmsds3, stream)
        res.phipsiRmsds = rmsds3
        for p in projects.entries[1:]:
            val = getDeepByKeysOrAttributes(projects, 'moleculeMap', res, p.name)
            if val == None:
                nTerror('Setting phipsiRmsds residue %s project %s (mapping not found)', res.name, p)
                continue
            val.phipsiRmsds = rmsds3
def exportDef(self, stream=sys.stdout, convention=constants.INTERNAL):
    """export definitions to stream"""
    io.printf(stream, "\t#---------------------------------------------------------------\n")
    io.printf(stream, "\tATOM %-8s\n", self.translate(convention))
    io.printf(stream, "\t#---------------------------------------------------------------\n")
    # Topology; optionally convert
    if convention == constants.INTERNAL:
        top2 = self.topology
    else:
        # convert topology
        top2 = []
        for resId, atmName in self.topology:
            if resId != 0:
                nTwarning("AtomDef.exportDef: %s topology (%d,%s) skipped translation", self, resId, atmName)
                top2.append((resId, atmName))
            elif not atmName in self.residueDef:
                nTerror("AtomDef.exportDef: %s topology (%d,%s) not decoded", self, resId, atmName)
                top2.append((resId, atmName))
            else:
                atm = self.residueDef[atmName]
                top2.append((resId, atm.translate(convention)))
            # end if
        # end for
        # print 'top2', top2
    # end if
    io.printf(stream, "\t\t%s = %s\n", "topology", repr(top2))
    # clean the properties list
    props = []
    for prop in self.properties:
        # Do not store name and residueDef.name as property. Add those dynamically upon reading
        if (not prop in [self.name, self.residueDef.name, self.residueDef.shortName, self.spinType]
                and not prop in props):
            props.append(prop)
        # end if
    # end for
    io.printf(stream, "\t\t%s = %s\n", "properties", repr(props))
    # Others
    for attr in ["nameDict", "aliases", "pseudo", "real", "type", "spinType", "shift", "hetatm"]:
        if self.has_key(attr):
            io.printf(stream, "\t\t%s = %s\n", attr, repr(self[attr]))
    # end for
    io.printf(stream, "\tEND_ATOM\n")
def filterListByObjectClassName(myList, className):
    'Return new list with only those objects that have given class name.'
    result = []
    if myList == None:
        return result
    if not isinstance(myList, list):
        nTerror('Input is not a list but a %s' % str(myList))
        return result
    # if len(myList) == 0:
    #     return result
    for obj in myList:
        oClassName = getDeepByKeysOrAttributes(obj, '__class__', '__name__')
        # nTdebug("oClassName: %s" % oClassName)
        if oClassName == className:
            result.append(obj)
    return result
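A self-contained sketch of the same filter, reading the class name directly via `obj.__class__.__name__` rather than through CING's `getDeepByKeysOrAttributes` helper:

```python
def filter_by_class_name(items, class_name):
    """Keep only the objects whose type name matches class_name."""
    if not isinstance(items, list):
        return []  # mirrors the original: non-lists (including None) give an empty result
    return [obj for obj in items if obj.__class__.__name__ == class_name]
```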
def nTflatten(obj):
    'Returns a tuple instead of the more commonly used NTlist or straight up list because this is going to be used for formatted printing.'
    if not isinstance(obj, (list, tuple)):
        nTerror("Object is not a list or tuple: %s", obj)
        return None
    result = []
    for element in obj:
        if isinstance(element, (list, tuple)):
            elementFlattened = nTflatten(element)
            if not isinstance(elementFlattened, (list, tuple)):
                nTerror("ElementFlattened is not a list or tuple: %s", obj)
                return None
            result += elementFlattened
        else:
            result.append(element)
        # end if
    # end for
    return tuple(result)
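The recursive flatten can be sketched without the error-reporting plumbing (an illustration of the technique, not the CING implementation):

```python
def flatten_to_tuple(obj):
    """Depth-first flatten of nested lists/tuples into a single tuple."""
    if not isinstance(obj, (list, tuple)):
        return None  # mirrors nTflatten rejecting non-sequence input
    result = []
    for element in obj:
        if isinstance(element, (list, tuple)):
            result.extend(flatten_to_tuple(element))  # recurse into nested sequences
        else:
            result.append(element)
    return tuple(result)
```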
def _mapIt(self, p1, objects, p2):
    for c1 in objects:
        # find the corresponding object in p2
        ctuple1 = c1.nameTuple()
        ctuple2 = list(ctuple1)
        ctuple2[0] = p2.molecule.name
        ctuple2 = tuple(ctuple2)
        c2 = p2.decodeNameTuple(ctuple2)
        if c2 == None:
            nTerror('Projects._mapIt: error mapping %s to %s (derived from %s)', ctuple2, p2, p1)
        self.moleculeMap.setdefault(c1, NTdict())
        self.moleculeMap[c1][(p1.name, p1.molecule.name)] = c1
        self.moleculeMap[c1][(p2.name, p2.molecule.name)] = c2
        if c2 != None:
            self.moleculeMap.setdefault(c2, NTdict())
            self.moleculeMap[c2][(p1.name, p1.molecule.name)] = c1
            self.moleculeMap[c2][(p2.name, p2.molecule.name)] = c2
def rename(self, newName):
    'Please use this rename instead of directly renaming so BMRB ID detection can kick in.'
    self.name = newName
    # Detect the id from strings like: bmr4020_21.str
    pattern = re.compile(r'^.*(bmr\d+).*$')
    match = pattern.match(self.name)
    if match:
        bmrb_idStr = match.group(1)[3:]
        self.bmrb_id = int(bmrb_idStr)
        if is_bmrb_code(self.bmrb_id):
            # nTdebug("-0- Autodetected BMRB ID %s from new name: %s" % (self.bmrb_id, self.name))
            return self
        # end if
        nTerror("Did not detect valid BMRB ID from new name: %s." % self.name)
        return self
    # end if
    # nTdebug("-2- No BMRB ID was matched from new name: %s" % self.name)
    # return self.projectList.rename(self.name, newName)
    return self
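The BMRB ID detection reduces to one regular expression; here is that extraction on its own, using the same pattern as the method above:

```python
import re

# the same pattern rename() uses to spot a BMRB id inside a file name
pattern = re.compile(r'^.*(bmr\d+).*$')

def extract_bmrb_id(name):
    """Return the integer BMRB id embedded in name, or None when absent."""
    match = pattern.match(name)
    if not match:
        return None
    return int(match.group(1)[3:])  # strip the 'bmr' prefix before the digits
```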
def renameToXplorCompatible(self):
    """rename to Xplor Compatible"""
    n = len(self.name)
    if n < constants.MAX_SIZE_XPLOR_RESTRAINT_LIST_NAME:
        # nTdebug("Kept the original xplor compatible drl name: %s" % self.name)
        return
    prefix = 'pl'
    if self.__CLASS__ == constants.DRL_LEVEL:
        prefix = constants.DRL_STR
    elif self.__CLASS__ == constants.ACL_LEVEL:
        prefix = constants.ACL_STR
    elif self.__CLASS__ == constants.RDCL_LEVEL:
        prefix = constants.RDCL_STR
    prefix += '_'
    newName = self.projectList.getNextValidName(prefix=prefix)
    if newName == None:
        nTerror("Failed renameToXplorCompatible for %s" % self)
        return
    self.rename(newName)
def appendResidueDef(self, name, **kwds):
    """
    Append a new ResidueDef instance name
    Return instance or None on error
    """
    resDef = ResidueDef(name, **kwds)
    if self.has_key(name):
        oldResDef = self[name]
        if not oldResDef.canBeModified:
            nTerror('MolDef.appendResidueDef: replacing residueDef "%s" not allowed', oldResDef)
            return None
        # end if
        # nTdebug('MolDef.appendResidueDef: replacing residueDef "%s"', oldResDef)
        self.replaceChild(oldResDef, resDef)
    else:
        self.addChild2(resDef)
    # end if
    resDef.molDef = self
    resDef.postProcess()
    return resDef
def _importTableFile(tabFile, molecule):
    """import a tabFile, match to residue instances of molecule
    Return the NmrPipeTable instance or None on error
    """
    if not os.path.exists(tabFile):
        nTerror('_importTableFile: table file "%s" not found', tabFile)
        return None
    if molecule == None:
        nTerror('_importTableFile: no molecule defined')
        return None
    # residues for which we will analyze; same as used in export2talosPlus
    residues = molecule.residuesWithProperties('protein')
    if not residues:
        nTerror('_importTableFile: no amino acid defined')
        return None
    table = NmrPipeTable()
    table.readFile(tabFile)
    for row in table:
        # find the residue
        row.residue = None
        if row.RESID > len(residues):
            nTerror('_importTableFile: invalid RESID %d', row.RESID)
            continue
        # map back onto CING
        res = residues[row.RESID - 1] # RESID started at 1
        if res.db.shortName != row.RESNAME.upper(): # also allow for the 'c'
            nTerror('_importTableFile: invalid RESNAME %s and CING %s', row.RESNAME, res)
            continue
        row.residue = res
        #print res, row
    #end for
    return table
def save(self, path=None):
    """
    Create a SML file
    Return self or None on error
    Sort the list on id before saving, to preserve (original) order from save to restore.
    """
    # sort the list on id number
    NTsort(self, byItem='id', inplace=True)
    if not path:
        path = self.objectPath
    if self.SMLhandler.toFile(self, path) != self:
        nTerror('%s.save: failed creating "%s"' % (self.__CLASS__, path))
        return None
    #end if
    # restore original sorting
    if self._byItem:
        NTsort(self, byItem=self._byItem, inplace=True)
    nTdetail('==> Saved %s to "%s"', self, path)
    return self
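The sort-for-saving-then-restore idea can be sketched with plain dicts; `sort_by_item` is a hypothetical stand-in for `NTsort(..., inplace=True)`:

```python
def sort_by_item(rows, key):
    """In-place sort on a dict key; a stand-in for NTsort(..., inplace=True)."""
    rows.sort(key=lambda row: row[key])

rows = [{'id': 3, 'rank': 0}, {'id': 1, 'rank': 1}, {'id': 2, 'rank': 2}]
sort_by_item(rows, 'id')        # canonical id order while writing to disk
saved_ids = [row['id'] for row in rows]
sort_by_item(rows, 'rank')      # restore the previous in-memory ordering afterwards
restored_ranks = [row['rank'] for row in rows]
```

Saving in a canonical order makes the on-disk file deterministic, while re-sorting afterwards leaves the in-memory list as the caller had it.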
def addColumn(self, name, fmt="%s", default=None):
    """
    Add column 'name' to table; set values to 'default'
    return columnDef, or None on error
    """
    if name in self:
        nTerror('NmrPipeTable.addColumn: column "%s" already exists\n', name)
        return None
    #end if
    col = NTdict(name=name, fmt=fmt, id=len(self.columnDefs), hide=False,
                 __FORMAT__='%(name)s')
    self.columnDefs.append(col)
    self[name] = col
    for row in self:
        row[name] = default
    #end for
    return col
def _runQueeny(project, tmp=None):
    """Perform a queeny analysis and save the results.
    Returns True on error.
    Returns False when all is fine.
    """
    nTmessage("==> Calculating restraint information by Queeny")
    if project is None:
        nTerror("runQueeny: No project defined")
        return True
    if project.molecule is None:
        nTerror("runQueeny: No molecule defined")
        return True
    if len(project.distances) == 0:
        nTmessage("==> runQueeny: No distance restraints defined.")
        return True
    queenyDefs = project.getStatusDict(constants.QUEENY_KEY, **queenyDefaults())
    queenyDefs.molecule = project.molecule.asPid
    path = project.validationPath(queenyDefs.directory)
    if not path:
        nTmessage("==> runQueeny: error creating '%s'", path)
        return True
    q = Queeny(project)
    q.execute()
    queenyDefs.date = io.now()
    queenyDefs.completed = True
    queenyDefs.parsed = True
    queenyDefs.version = __version__
    del(q)
    return False
def restoreShiftx100(project):
    """
    Restore shiftx results by parsing files.
    Return True on error
    """
    if project is None:
        nTdebug("restoreShiftx100: No project defined")
        return True
    if project.molecule == None:
        return True # Gracefully returns
    defs = project.getStatusDict(constants.SHIFTX_KEY)
    # Older versions; initialize the required keys of shiftx Status from xml file
    if project.version < 0.881:
        path = project.validationPath(cdefs.validationsDirectories.shiftx)
        if not path:
            nTerror('restoreShiftx100: directory "%s" with shiftx data not found', path)
            return True
        xmlFile = project.path() / 'content.xml'
        if not xmlFile.exists():
            nTerror('restoreShiftx100: Shiftx results xmlFile "%s" not found', xmlFile)
            return True
        #end if
        shiftxResult = xmlTools.xML2obj(xmlFile)
        if not shiftxResult:
            nTerror('restoreShiftx100: restoring Shiftx results from xmlFile "%s" failed', xmlFile)
            return True # docstring promises True, not None, on error
        defs.update(shiftxResult)
        defs.completed = True
    #end if
    # update some of the settings
    if 'moleculeName' in defs:
        del defs['moleculeName']
    if 'path' in defs:
        defs.directory = disk.Path(defs.path)[-1:]
        del defs['path']
    else:
        defs.directory = constants.SHIFTX_KEY
    if 'contentFile' in defs: # was misspelled 'contenFile', which never matched the key deleted here
        del defs['contentFile']
    if not defs.completed:
        nTdebug('restoreShiftx100: shiftx not completed')
        return True
    return project.parseShiftx()
def nTtracebackError():
    traceBackString = format_exc()
    # print 'DEBUG: nTtracebackError: [%s]' % traceBackString
    if traceBackString == None:
        traceBackString = 'No traceback error string available.'
    nTerror(traceBackString)
def getRevDateCingLog(fileName):
    """Return int revision and date or None on error."""
    txt = readTextFromFile(fileName)
    if txt == None:
        nTerror("In %s failed to find %s" % (getCallerName(), fileName))
        return None
    # Parse
    ##======================================================================================================
    ##| CING: Common Interface for NMR structure Generation version 0.95 (r972) AW,JFD,GWV 2004-2011 |
    ##======================================================================================================
    #User: i on: vc (linux/32bit/8cores/2.6.4) at: (10370) Sat Apr 16 14:24:12 2011
    txtLineList = txt.splitlines()
    if len(txtLineList) < 2:
        nTerror("In %s failed to find at least two lines in %s" % (getCallerName(), fileName))
        return None
    txtLine = txtLineList[1]
    reMatch = re.compile(r'^.+\(r(\d+)\)') # The number between brackets.
    searchObj = reMatch.search(txtLine)
    if not searchObj:
        nTerror("In %s failed to find a regular expression match for the revision number in line %s" % (getCallerName(), txtLine))
        return None
    rev = int(searchObj.group(1))
    if len(txtLineList) < 4:
        nTerror("In %s failed to find at least four lines in %s" % (getCallerName(), fileName))
        return None
    txtLine = txtLineList[3]
    reMatch = re.compile(r'^.+\(\d+\) (.+)$') # The 24 character standard notation from time.asctime()
    searchObj = reMatch.search(txtLine)
    if not searchObj:
        nTerror("In %s failed to find a regular expression match for the start timestamp in line %s" % (getCallerName(), txtLine))
        return None
    tsStr = searchObj.group(1) # Sat Apr 16 14:24:12 2011
    try:
        # struct_timeObject = time.strptime(tsStr)
        dt = datetime.datetime(*(time.strptime(tsStr)[0:6]))
        # dt = datetime.datetime.strptime(tsStr)
    except:
        nTtracebackError()
        nTerror("Failed to parse datetime from: %s" % tsStr)
        return None
    return rev, dt
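The two regular expressions above can be exercised standalone on the sample log lines quoted in the comments; `time.strptime` without an explicit format accepts exactly the `time.asctime()` layout the comment refers to:

```python
import re
import time
import datetime

# sample lines matching the format documented in getRevDateCingLog
bannerLine = "#| CING: Common Interface for NMR structure Generation version 0.95 (r972) AW,JFD,GWV 2004-2011 |"
userLine = "#User: i on: vc (linux/32bit/8cores/2.6.4) at: (10370) Sat Apr 16 14:24:12 2011"

# revision: the digits after '(r'
rev = int(re.search(r'^.+\(r(\d+)\)', bannerLine).group(1))
# timestamp: everything after the last all-digit parenthesized token
tsStr = re.search(r'^.+\(\d+\) (.+)$', userLine).group(1)
dt = datetime.datetime(*(time.strptime(tsStr)[0:6]))
```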