def fetch_dutyondata2lite():
    zhutis = [['放假', 'holiday'], ['请假', 'leave'], ['打卡', 'checkin'],
              ['入职', 'dutyon'], ['高温', 'hot'], ['下雨', 'rain']]
    try:
        for zhuti in zhutis:
            dfresult = chuliholidayleave_note(zhuti)
            if zhuti[0] in ['高温', '下雨']:
                if zhuti[0] == '高温':
                    dffromgd, dfallfromgd = getgaowenfromgoogledrive()
                    dfresult = dfresult.append(dffromgd)
                    dfresult.drop_duplicates(['hottime'], inplace=True)
                else:
                    dffromgd, dfallfromgd = getrainfromgoogledrive()
                    dfresult = dfresult.append(dffromgd)
                    dfresult.drop_duplicates(['raintime'], inplace=True)
            countfromini = getcfpoptionvalue('everworkplan', '行政管理',
                                             f'{zhuti[0]}count')
            if not countfromini:
                countfromini = 0
            if countfromini == dfresult.shape[0]:
                log.info(f"本轮查询没有发现新的《{zhuti[0]}》相关数据,跳过!")
                continue
            cnxp = lite.connect(dbpathworkplan)
            # index, ['mingmu', 'xingzhi', 'tianshu', 'date']
            dfresult.to_sql(zhuti[1], cnxp, if_exists='replace', index=None)
            cnxp.close()
            log.info(f'{zhuti[0]}数据表更新了{dfresult.shape[0]}条记录。')
            setcfpoptionvalue('everworkplan', '行政管理', f'{zhuti[0]}count',
                              f"{dfresult.shape[0]}")
    except OSError as exp:
        topic = [x for [x, *y] in zhutis]
        log.critical(f'从evernote获取{topic}笔记信息时出现未名错误。{exp}')
def getgroupdf(dfs, xiangmus, period='month'):
    # global log
    # Dates are unique here, so this only builds a date frame; the values are
    # arbitrary (the per-day sum is used, which works for numeric columns).
    dfmobans = dfs.groupby('日期')[xiangmus].sum()
    dfout = pd.DataFrame()
    for xiangmu in xiangmus:
        if xiangmu in list(dfmobans.columns):
            dfmoban = dfmobans[xiangmu]
        else:
            log.info(str(set(dfs['品牌'])) + xiangmu + '无数据')
            continue
        dfmoban = dfmoban.dropna()  # drop NaN values to avoid interference
        if dfmoban.shape[0] == 0:  # no valid data for this item, skip it
            continue
        dates = pd.date_range(
            dfmoban.index.min(),
            periods=(dfmoban.index.max() - dfmoban.index.min()).days + 1,
            freq='D')
        dfman = dfmoban.reindex(dates)
        for ix in dfman.index:
            # MonthEnd()/YearBegin() are error-prone around month boundaries,
            # so the period origin is built explicitly from the date parts.
            if period == 'year':
                yuandiandate = pd.to_datetime('%4d-01-01' % ix.year)
                # yuandiandate = ix + YearBegin(-1)
            else:
                yuandiandate = pd.to_datetime('%4d-%02d-01' % (ix.year, ix.month))
                # yuandiandate = ix + MonthBegin(-1)
            dftmp = ((dfs[(dfs.日期 >= yuandiandate) & (dfs.日期 <= ix) &
                          (dfs[xiangmu].isnull().values == False)]
                      ).groupby('客户编码'))[xiangmu].count()
            dfman[ix] = dftmp.shape[0]
        dfout = dfout.join(pd.DataFrame(dfman), how='outer')
    return dfout
def jilugooglefile(filepath):
    filelist = [
        ff for ff in listdir(str(filepath)) if isfile(str(filepath / ff))
    ]
    print(filelist)
    dfout = None
    for i in range(len(filelist)):
        df = pd.read_excel(str(filepath / filelist[i]), sheet_name='工作表1',
                           header=None, index_col=0, parse_dates=True)
        if df.shape[0] == 0:
            log.info('%s 无进出记录' % filelist[i])
            continue
        # descdb(df)
        df['shuxing'] = df.iloc[:, 1].apply(lambda x: x.split(' ')[0])
        df['didian'] = df.iloc[:, 1].apply(lambda x: x.split(' ')[1])
        df['entered'] = df.iloc[:, 0].apply(
            lambda x: True if x == 'entered' else False)
        dff = df.iloc[:, [6, 4, 5]]
        dff.columns = ['entered', 'shuxing', 'address']
        # descdb(dff)
        if dfout is None:  # guard against the first file being skipped as empty
            dfout = dff
        else:
            dfout = dfout.append(dff).sort_index()
    return dfout
def getnotelist(name, wcpath, notebookguid):
    """
    Fetch the cloud-side list of record notes for the given WeChat account name.
    """
    notelisttitle = f"微信账号({name})记录笔记列表"
    loginstr = f",登录用户:{whoami}" if (whoami := execcmd("whoami")) else ""
    timenowstr = pd.to_datetime(datetime.now()).strftime("%F %T")
    if (notelistguid := getcfpoptionvalue('everwcitems', "common",
                                          f"{name}_notelist_guid")) is None:
        findnotelst = findnotefromnotebook(notebookguid, notelisttitle,
                                           notecount=100)
        if len(findnotelst) == 1:
            notelistguid = findnotelst[0][0]
            log.info(f"文件列表《{notelisttitle}》的笔记已经存在,取用")
        else:
            nrlst = list()
            # initialise the content header in the same format as a normal one
            nrlst.append(f"账号\t{name}\n笔记数量\t-1")
            nrlst.append("")
            nrlst.append(
                f"\n本笔记创建于{timenowstr},来自于主机:{getdevicename()}{loginstr}")
            note_body_str = '\n---\n'.join(nrlst)
            note_body = f"<pre>{note_body_str}</pre>"
            notelistguid = makenote2(notelisttitle, notebody=note_body,
                                     parentnotebookguid=notebookguid).guid
            log.info(f"文件列表《{notelisttitle}》被首次创建!")
        setcfpoptionvalue('everwcitems', "common", f'{name}_notelist_guid',
                          str(notelistguid))
# Inner wrapper of a logging decorator: `func` is supplied by the enclosing
# decorator factory (see the hedged sketch below).
def with_logging(*args, **kwargs):
    if getinivaluefromnote('everwork', 'logdetails'):
        log.info(f'{func.__name__}函数被调用,参数列表:{args}')
    else:
        print(f'{func.__name__}函数被调用,参数列表:{args}')
    return func(*args, **kwargs)
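# Hedged sketch: with_logging above is presumably the inner wrapper of a
# decorator factory shaped roughly like the one below. The factory name
# `log_calls` and the functools.wraps usage are illustrative assumptions,
# not taken from the original source.
import functools  # normally imported at module top


def log_calls(func):
    """Decorate func so every call is logged (or printed) before execution."""
    @functools.wraps(func)
    def with_logging(*args, **kwargs):
        if getinivaluefromnote('everwork', 'logdetails'):
            log.info(f'{func.__name__}函数被调用,参数列表:{args}')
        else:
            print(f'{func.__name__}函数被调用,参数列表:{args}')
        return func(*args, **kwargs)
    return with_logging


# Usage (illustrative):
#
#     @log_calls
#     def sync_notes():
#         ...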
def updatewcitemsxlsx2note(name, dftest, wcpath, notebookguid):
    """
    Process the DataFrame built from the local resource file. If its record
    count equals the count registered in the ini file, return; otherwise
    compare against the count registered on the note side. If those match,
    skip; if not, pull the note's resource file, merge it with the local one,
    update the note's resource file and refresh the ini count (using the
    merged record count).
    """
    ny = dftest['time'].iloc[0].strftime("%y%m")
    dftfilename = f"wcitems_{name}_{ny}.xlsx"
    dftallpath = wcpath / dftfilename
    dftallpathabs = os.path.abspath(dftallpath)
    print(dftallpathabs)
    loginstr = f",登录用户:{whoami}" if (whoami := execcmd("whoami")) else ""
    timenowstr = pd.to_datetime(datetime.now()).strftime("%F %T")
    if (dftfileguid := getcfpoptionvalue('everwcitems', dftfilename,
                                         'guid')) is None:
        findnotelst = findnotefromnotebook(notebookguid, dftfilename,
                                           notecount=1)
        if len(findnotelst) == 1:
            dftfileguid = findnotelst[0][0]
            log.info(f"数据文件《{dftfilename}》的笔记已经存在,取用")
        else:
            # initialise the content header in the same format as a normal one
            first_note_desc = f"账号\t{None}\n记录数量\t-1"
            first_note_body = (f"<pre>{first_note_desc}\n---\n\n本笔记创建于{timenowstr},"
                               f"来自于主机:{getdevicename()}{loginstr}</pre>")
            dftfileguid = makenote2(dftfilename, notebody=first_note_body,
                                    parentnotebookguid=notebookguid).guid
        setcfpoptionvalue('everwcitems', dftfilename, 'guid', str(dftfileguid))
def alipay2note():
    cnxp = lite.connect(dbpathdingdanmingxi)
    pathalipay = dirmainpath / 'data' / 'finance' / 'alipay'
    dfall = chulidataindir(cnxp, 'alipay', '支付宝流水', '2088802968197536',
                           '支付宝', pathalipay, chulixls_zhifubao)
    zhds = fenliu2note(dfall)
    cnxp.close()
    financesection = '财务流水账'
    item = '支付宝白晔峰流水条目'
    cfpzysm, inizysmpath = getcfp('everzysm')
    if not cfpzysm.has_option(financesection, item):
        count = 0
    else:
        count = cfpzysm.getint(financesection, item)
    if count == zhds.shape[0]:
        log.info(f'{item}\t{zhds.shape[0]}\t无内容更新。')
        return zhds
    else:
        log.info(f'{item}\t{zhds.shape[0]}\t内容有更新。')
        nowstr = datetime.datetime.now().strftime('%F %T')
        imglist2note(
            get_notestore(), [], 'f5bad0ca-d7e4-4148-99ac-d3472f1c8d80',
            f'支付宝白晔峰流水({nowstr})',
            tablehtml2evernote(zhds, tabeltitle='支付宝白晔峰流水', withindex=False))
        cfpzysm.set(financesection, item, f'{zhds.shape[0]}')
        cfpzysm.write(open(inizysmpath, 'w', encoding='utf-8'))
        return zhds
def isitchat(pklabpath):
    """
    Check whether itchat is already logged in; if not, hot-reload it.
    Return True on success, otherwise exit the program.
    """
    inputpklpath = os.path.abspath(pklabpath)
    # print(inputpklpath)
    if itchat.originInstance.alive:
        # convert to an absolute path so the two can be compared
        loginpklpath = os.path.abspath(itchat.originInstance.hotReloadDir)
        if inputpklpath == loginpklpath:
            log.info(f"微信处于正常登录状态,pkl路径为:\t{loginpklpath}……")
        else:
            logstr = f"当前登录的pkl路径为{loginpklpath},不同于传入的参数路径:\t{inputpklpath}"
            log.critical(logstr)
            sys.exit(1)
    else:
        itchat.auto_login(hotReload=True, statusStorageDir=pklabpath)  # hot-reload WeChat
        if not itchat.originInstance.alive:
            log.critical("微信未能热启动,仍处于未登陆状态,退出!")
            sys.exit(1)
        else:
            loginpklpath = os.path.abspath(itchat.originInstance.hotReloadDir)
            logstr = f"微信热启动成功\t{loginpklpath}"
            log.info(logstr)
    return True
def chulidataindir_orderdetails(pathorder: Path):
    notestr = '订单明细'
    cnxp = lite.connect(dbpathdingdanmingxi)
    tablename_order = 'orderdetails'
    sqlstr = "select count(*) from sqlite_master where type='table' and name = '%s'" % tablename_order
    tablexists = pd.read_sql_query(sqlstr, cnxp).iloc[0, 0] > 0
    if tablexists:
        # dfresult = pd.DataFrame()
        dfresult = pd.read_sql('select * from \'%s\'' % tablename_order, cnxp,
                               parse_dates=['日期'])
        log.info('%s数据表%s已存在, 从中读取%d条数据记录。' %
                 (notestr, tablename_order, dfresult.shape[0]))
    else:
        log.info('%s数据表%s不存在,将创建之。' % (notestr, tablename_order))
        dfresult = pd.DataFrame()
    files = os.listdir(str(pathorder))
    for fname in files:
        if fname.startswith(notestr) and (fname.endswith('xls')
                                          or fname.endswith('xlsx')):
            yichulifilelist = list()
            if (yichulifile := getcfpoptionvalue('everzysm', notestr,
                                                 '已处理文件清单')):
                yichulifilelist = yichulifile.split()
            if fname in yichulifilelist:
                continue
            print(fname, end='\t')
            dffname = chulixls_orderdetails(pathorder / fname)
            if dffname is None:
                continue
            dfresult = dfresult.append(dffname)
            print(dffname.shape[0], end='\t')
            print(dfresult.shape[0])
            yichulifilelist.append(fname)
            setcfpoptionvalue('everzysm', notestr, '已处理文件清单',
                              '%s' % '\n'.join(yichulifilelist))
def termux_sms_send(msg='hi'):
    cmdlist = ['termux-sms-send', '-n', '15387182166', f'{msg}']
    out, rc, err = utils.execute(cmdlist)
    if rc:
        log.warning(f"发送短信时出现错误:{msg}")
        raise Exception(err)
    else:
        log.info(f"成功发送短信。")
        return out
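# Hedged usage sketch: a thin wrapper that reports delivery failures instead of
# letting the exception propagate. The wrapper name and the message text are
# illustrative additions, not part of the original module.
def _demo_send_sms_safely(text='打卡提醒'):
    try:
        return termux_sms_send(text)
    except Exception as exp:
        log.critical(f"发送短信失败:{exp}")
        return None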
def readfromtxt(fn):
    if not os.path.exists(fn):
        newfile = open(fn, 'w', encoding='utf-8')
        newfile.close()
    with open(fn, 'r', encoding='utf-8') as fff:
        itemsr = [line.strip() for line in fff if len(line.strip()) > 0]
        # for line in f:
        #     print(line)
    # mingmu and fenleistr are expected to be module-level globals
    log.info("《%s-%s》现有%d条记录。" % (mingmu, fenleistr, len(itemsr)))
    return itemsr
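# Hedged usage sketch: read a line-oriented data file and report how many
# non-blank items it currently holds. The function name and file path are
# illustrative only, not part of the original module.
def _demo_count_items(fn='data/demo_items.txt'):
    items = readfromtxt(fn)
    print(f"{fn}\t{len(items)}条")
    return items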
def fetchmjurlfromfile(ownername):
    """
    fetch all zhanji urls from chatitems files
    """
    ownpy = Pinyin().get_pinyin(ownername, '')
    datapath = getdirmain() / 'data' / 'webchat'
    datafilelist = os.listdir(datapath)
    print(datapath)
    resultlst = list()
    for filenameinner in datafilelist:
        if not filenameinner.startswith('chatitems'):
            continue
        filename = datapath / filenameinner
        rstlst = []
        # the text files may use different encodings, so try a set of them
        decode_set = [
            'utf-8', 'gb18030', 'ISO-8859-2', 'gb2312', 'gbk', 'Error'
        ]
        for dk in decode_set:
            try:
                with open(filename, "r", encoding=dk) as f:
                    filelines = f.readlines()
                rstlstraw = [
                    inurl for line in filelines
                    for inurl in splitmjurlfromtext(line)
                ]
                # drop duplicate urls within this file
                rstlst = list(set(rstlstraw))
                print(len(rstlst), len(rstlstraw), filename, dk)
                break
            except UnicodeDecodeError as eef:
                print(eef)
                continue
            except LookupError as eel:
                print(eel)
                if dk == 'Error':
                    print(f"{filename}没办法用预设的字符集正确打开")
                    break
        resultlst.extend(rstlst)
    resultlst = list(set(resultlst))  # deduplicate across files
    # print(resultlst[:10])
    if (urlsnum := getcfpoptionvalue(f'evermuse_{ownpy}', ownername, 'urlsnum')):
        if urlsnum == len(resultlst):
            changed = False
            log.info(f"战绩链接数量暂无变化, till then is {len(resultlst)}.")
        else:
            changed = True
            urlsnumnew = len(resultlst)
            setcfpoptionvalue(f'evermuse_{ownpy}', ownername, 'urlsnum',
                              f"{urlsnumnew}")
            log.info(f"战绩链接数 is set to {urlsnumnew} now.")
def chulixls_orderdetails(orderfile: Path):
    try:
        content = xlrd.open_workbook(filename=orderfile,
                                     encoding_override='gb18030')
        df = pd.read_excel(content, index_col=0, parse_dates=True,
                           engine='xlrd')
        log.info(f'读取{orderfile}')
        # print(list(df.columns))
    except UnicodeDecodeError as ude:
        log.critical(f'读取{orderfile}时出现解码错误。{ude}')
        return
    # ['日期', '单据编号', '摘要', '单位全名', '仓库全名', '商品编号', '商品全名', '规格',
    #  '型号', '产地', '单位', '数量', '单价', '金额', '数量1', '单价1', '金额1',
    #  '数量2', '单价2', '金额2']
    # take the quantity and amount totals from the last row for later comparison
    totalin = ['%.2f' % df.loc[df.index.max()]['数量'],
               '%.2f' % df.loc[df.index.max()]['金额']]
    print(df['日期'].iloc[1], end='\t')
    print(totalin, end='\t')
    # df[xiangmu[0]] = None
    # df = df.loc[:, ['日期', '单据编号', '单据类型', xiangmu[0], '摘要', '备注', '商品备注',
    #                 xiangmu[1], '单价', '单位', '数量', '金额', '单位全名', '仓库全名', '部门全名']]
    df = df.loc[:, df.columns[:-6]]
    df['日期'] = pd.to_datetime(df['日期'])
    # df['备注'] = df['备注'].astype(object)
    dfdel = df[(df.单位全名.isnull().values == True)
               & ((df.单据编号.isnull().values == True)
                  | (df.单据编号 == '小计') | (df.单据编号 == '合计'))]
    hangdel = list(dfdel.index)
    # print(hangdel)
    df1 = df.drop(hangdel)  # drop the subtotal/total rows into a new DataFrame
    dfzhiyuan = df1[df1.单位全名.isnull().values == True]  # rows carrying section (employee) names
    zyhang = list(dfzhiyuan.index)
    zyming = list(dfzhiyuan['单据编号'])  # the section (employee) names
    # fill each name forward to the last row, overwriting on every pass
    df['员工名称'] = None
    for i in range(len(zyhang)):
        df.loc[zyhang[i]:, '员工名称'] = zyming[i]
    # drop the name rows, keeping pure data
    dfdel = df[df.单位全名.isnull().values == True]
    # print(dfdel[['日期', '单据编号', '数量', '金额']])
    hangdel = list(dfdel.index)
    # print(hangdel)
    dfout = df.drop(hangdel)
    dfout.index = range(len(dfout))
    dfout = pd.DataFrame(dfout)
    # print(dfout)
    # print(dfout.head(10))
    log.info('共有%d条有效记录' % len(dfout))
    # print(list(dfout.columns))
    if (totalin[0] == '%.2f' % dfout.sum()['数量']) & (
            totalin[1] == '%.2f' % dfout.sum()['金额']):
        dfgrp = dfout.groupby(['员工名称']).sum()[['数量', '金额']]
        dfgrp.loc['汇总'] = dfgrp.sum()
        print(dfgrp.loc['汇总'].values)
        return dfout
    else:
        log.warning(f'对读入文件《{orderfile}》的数据整理有误!总数量和总金额对不上!')
        return
def getsinglepage(url: str):
    """
    Fetch the content at the given url, extract the valid match-record data
    and return it as a DataFrame.
    """
    mjhtml = requests.get(url)
    mjhtml.encoding = mjhtml.apparent_encoding
    log.info(f"网页内容编码为:\t{mjhtml.encoding}")
    soup = BeautifulSoup(mjhtml.text, 'lxml')
    if (souptitle := soup.title.text) == "404 Not Found":
        print(f"该网页无有效内容返回或者已经不存在\t{url}")
        return pd.DataFrame()
def df2smsdb(indf: pd.DataFrame, tablename="sms"):
    dbname = touchfilepath2depth(getdirmain() / "data" / "db" /
                                 f"phonecontact_{getdeviceid()}.db")
    checkphoneinfotable(dbname)
    conn = lite.connect(dbname)
    recordctdf = pd.read_sql(f"select * from {tablename}", con=conn)
    indf.to_sql(tablename, con=conn, if_exists="append", index=False)
    afterinsertctdf = pd.read_sql(f"select * from {tablename}", con=conn)
    conn.close()
    logstr = (f"记录既有数量:\t{recordctdf.shape[0]},"
              f"待添加的记录数量为:\t{indf.shape[0]},"
              f"后的记录数量总计为:\t{afterinsertctdf.shape[0]}")
    log.info(logstr)
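# Hedged usage sketch: append a tiny hand-built DataFrame of SMS records. The
# column names ("msgtime", "number", "content") are illustrative assumptions;
# the real schema is whatever checkphoneinfotable and the source data define.
def _demo_append_sms():
    demodf = pd.DataFrame([
        {"msgtime": 1700000000, "number": "10086", "content": "测试短信"},
    ])
    df2smsdb(demodf, tablename="sms")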
def fileetc_reply(msg):
    innermsg = formatmsg(msg)
    createtimestr = time.strftime("%Y%m%d", time.localtime(msg['CreateTime']))
    filepath = getdirmain() / "img" / "webchat" / createtimestr
    filepath = filepath / f"{innermsg['fmSender']}_{msg['FileName']}"
    touchfilepath2depth(filepath)
    log.info(f"保存{innermsg['fmType']}类型文件:\t{str(filepath)}")
    msg['Text'](str(filepath))
    innermsg['fmText'] = str(filepath)
    writefmmsg2txtandmaybeevernotetoo(innermsg)
def checkbatteryinfotable(dbname: str, tablename: str):
    """
    Check whether the device's battery-info table has been created, and record
    the corresponding ini flag so the database file does not have to be opened
    and closed repeatedly just for this check.
    """
    if not (batteryinfocreated := getcfpoptionvalue('everhard',
                                                    str(getdeviceid()),
                                                    'batteryinfodb')):
        print(batteryinfocreated)
        csql = f"create table if not exists {tablename} (appendtime int PRIMARY KEY, percentage int, temperature float)"
        ifnotcreate(tablename, csql, dbname)
        setcfpoptionvalue('everhard', str(getdeviceid()), 'batteryinfodb',
                          str(True))
        logstr = f"数据表{tablename}在数据库{dbname}中构建成功"
        log.info(logstr)
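# Hedged usage sketch: the typical call pattern is to ensure the table exists
# once, then append a reading. The table name "battery", the sample values and
# the local time import are illustrative assumptions, not part of the original.
def _demo_log_battery_reading(dbname, percentage=80, temperature=31.5):
    import time  # local import to keep the sketch self-contained
    checkbatteryinfotable(dbname, "battery")
    conn = lite.connect(dbname)
    conn.execute("insert or ignore into battery values (?, ?, ?)",
                 (int(time.time()), percentage, temperature))
    conn.commit()
    conn.close()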
def dbsql(dbin, csqlin):
    """
    Execute a SQL statement against the given database file via a cursor.
    """
    conn = lite.connect(dbin)
    cursor = conn.cursor()
    cursor.execute(csqlin)
    conn.commit()
    tcs = conn.total_changes
    logstr = f"数据库{dbin}中有{tcs}行数据被影响"
    log.info(logstr)
    conn.close()
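# Hedged usage sketch: run one-off housekeeping statements through dbsql. The
# database path, table name and SQL below are illustrative assumptions only.
def _demo_housekeeping(dbname='data/db/demo.db'):
    dbsql(dbname, "create table if not exists demo (id INTEGER PRIMARY KEY, note text)")
    dbsql(dbname, "delete from demo where id < 0")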
def checkwcdelaytable(dbname: str, tablename: str):
    """
    Check whether the delay table matching dbname (absolute path) has been
    created, and record the corresponding ini flag so the database file does
    not have to be opened and closed repeatedly just for this check.
    """
    if (wcdelaycreated := getcfpoptionvalue(
            'everwebchat', os.path.abspath(dbname), tablename)) is None:
        print(wcdelaycreated)
        csql = f"create table if not exists {tablename} (id INTEGER PRIMARY KEY AUTOINCREMENT, msgtime int, delay int)"
        ifnotcreate(tablename, csql, dbname)
        setcfpoptionvalue('everwebchat', os.path.abspath(dbname), tablename,
                          str(True))
        logstr = f"数据表{tablename}在数据库{dbname}中构建成功"
        log.info(logstr)
def checknewthenupdatenote():
    """
    Check whether any program file has been updated (using the file mtime as
    the criterion) and push the changes to the corresponding notes.
    """
    nbdf = findnotebookfromevernote()
    ttt = list()
    findfilesincluedef(getdirmain(), ttt, '.py')
    ptnfiledesc = re.compile(r"(?:^\"\"\"(.*?)\"\"\"$)",
                             re.MULTILINE | re.DOTALL)
    ptnnamedesc = re.compile(
        r"""^def\s+((?:\w+)\(.*?\))\s*:\s*\n (?:\s+\"\"\"(.*?)\"\"\")?""",
        re.MULTILINE | re.DOTALL)
    protitle = 'p_ew_'
    netnblst = list(nbdf.名称)
    for fn in ttt:
        nbnamelst = fn.rsplit('/', 1)
        if len(nbnamelst) == 1:
            nbnamelst.insert(0, 'root')
        nbnamelst[0] = protitle + nbnamelst[0]
        # ['p_ew_jpy', 'chuqin.py']
        nbname, filename = nbnamelst[0], nbnamelst[1]
        if (ennotetime := getcfpoptionvalue('evercode', nbname,
                                            filename)) is None:
            # get the notebook guid; create the notebook if it does not exist
            if (nbguid := getcfpoptionvalue('evercode', nbname,
                                            'guid')) is None:
                logstr = f"笔记本《{nbname}》在ini中不存在,可能需要构造之。"
                log.info(logstr)
                if nbname in netnblst:
                    nbguid = nbdf[nbdf.名称 == nbname].index.values[0]
                    # print(nbguid)
                else:
                    notebook = createnotebook(nbname)
                    netnblst.append(nbname)
                    nbguid = notebook.guid
                setcfpoptionvalue('evercode', nbname, "guid", nbguid)
            # get the note guid; create the note if it does not exist
            if (noteguid := getcfpoptionvalue('evercode', nbname,
                                              f'{filename}_guid')) is None:
                logstr = f"笔记《{filename}》在ini中不存在,可能需要构造之。"
                log.info(logstr)
                items = findnotefromnotebook(nbguid, filename)
                if len(items) > 0:
                    # [noteguid, notetitle, note.updateSequenceNum]
                    noteguid = items[-1][0]
                else:
                    note = makenote(gettoken(), get_notestore(), filename,
                                    parentnotebook=nbguid)
                    noteguid = note.guid
def chuliquandan():
    """
    Process the 全单统计管理 (full-order statistics) workbook files.
    """
    workpath = dirmainpath / 'data/work'
    khqdnamelst = [x for x in os.listdir(workpath) if x.find('全单统计管理') >= 0]
    # print(khqdnamelst)
    # sort the matching files by modification time, ascending
    khqdnamelst.sort(key=lambda fn: os.path.getmtime(workpath / fn))
    newestonlyname = khqdnamelst[-1]
    newestfn = workpath / newestonlyname
    targetfn = dirmainpath / 'data' / '全单统计管理最新.xlsm'
    cfpdata, cfpdatapath = getcfp('everdata')
    if not cfpdata.has_section('dataraw'):
        cfpdata.add_section('dataraw')
        cfpdata.write(open(cfpdatapath, 'w', encoding='utf-8'))
    if not cfpdata.has_option('dataraw', 'quandannewestname'):
        cfpdata.set('dataraw', 'quandannewestname', '')
        cfpdata.write(open(cfpdatapath, 'w', encoding='utf-8'))
    if cfpdata.get('dataraw', 'quandannewestname') != newestonlyname:
        shutil.copy(newestfn, targetfn)
        cfpdata.set('dataraw', 'quandannewestname', newestonlyname)
        cfpdata.write(open(cfpdatapath, 'w', encoding='utf-8'))
        log.info(f"《全单统计管理》有新文件:{newestonlyname}")
    cnx = lite.connect(dbpathquandan)
    if gengxinfou(targetfn, cnx, 'fileread'):  # or True:
        # legacy xlrd-based implementation kept for reference
        # workbook = xlrd.open_workbook(targetfn, encoding_override="cp936")
        # workbook = xlrd.open_workbook(targetfn)
        # sheet = workbook.sheet_by_name('全单统计管理')
        # # sheet name, row count, column count
        # print(sheet.name, sheet.nrows, sheet.ncols)
        # datafromsheet = [sheet.row_values(i, 0, sheet.ncols) for i in
        #                  range(0, sheet.nrows)]
        # # print(datafromsheet[:5])
        # df = pd.DataFrame(datafromsheet[1:], columns=datafromsheet[0])
        # df = df.loc[:, ['往来单位全名', '往来单位编号', '联系人', '联系电话', '地址']]
        df = pd.read_excel(targetfn, sheet_name='全单统计管理',
                           parse_dates=['订单日期', '送达日期', '收款日期'])
        print(df)
        itemnumberfromnote = getinivaluefromnote('datasource',
                                                 'randomnumber4customer')
        itemnunber2show = len(df) if len(df) < itemnumberfromnote else itemnumberfromnote
        print(df.loc[random.sample(range(0, len(df)), itemnunber2show), :])
        df.to_sql(name='quandantjgl', con=cnx, if_exists='replace')
        log.info(f"写入{len(df)}条记录到quandantjgl数据表中")
        # read_excel() never managed to resolve the encoding of excel files
        # with no declared encoding
        # df = pd.read_excel(targetfn, encoding='cp936')
        # print(df)
    cnx.close()
def merge2note(dfdict, wcpath, notebookguid, newfileonly=False):
    """
    Process the dfdict built from the text files: for each account, read the
    local resource files and the note list, compare them, and update or skip
    accordingly.
    """
    for name in dfdict.keys():
        fllstfromnote = getnotelist(name, wcpath, notebookguid=notebookguid)
        ptn = f"wcitems_{name}_" + r"\d{4}.xlsx"  # wcitems_heart5_2201.xlsx
        xlsxfllstfromlocal = [
            fl for fl in os.listdir(wcpath) if re.search(ptn, fl)
        ]
        if len(fllstfromnote) != len(xlsxfllstfromlocal):
            print(f"{name}的数据文件本地数量\t{len(xlsxfllstfromlocal)},"
                  f"云端笔记列表中为\t{len(fllstfromnote)},"
                  "两者不等,先把本地缺的从网上拉下来")
            misslstfromnote = [
                fl for fl in fllstfromnote if fl[0] not in xlsxfllstfromlocal
            ]
            for fl, guid in misslstfromnote:
                reslst = getnoteresource(guid)
                if len(reslst) != 0:
                    for res in reslst:
                        flfull = wcpath / fl
                        fh = open(flfull, 'wb')
                        fh.write(res[1])
                        fh.close()
                    dftest = pd.read_excel(flfull)
                    setcfpoptionvalue('everwcitems', fl, 'guid', guid)
                    setcfpoptionvalue('everwcitems', fl, 'itemsnum',
                                      str(dftest.shape[0]))
                    setcfpoptionvalue('everwcitems', fl, 'itemsnum4net',
                                      str(dftest.shape[0]))
                    log.info(
                        f"文件《{fl}》在本地不存在,从云端获取存入并更新ini(section:{fl},guid:{guid})"
                    )
        xlsxfllst = sorted(
            [fl for fl in os.listdir(wcpath) if re.search(ptn, fl)])
        print(f"{name}的数据文件数量\t{len(xlsxfllst)}", end=",")
        if newfileonly:
            xlsxfllst = xlsxfllst[-2:]
        xflen = len(xlsxfllst)
        print(f"本次处理的数量为\t{xflen}")
        for xfl in xlsxfllst:
            print(f"{'-' * 15}\t{name}\t【{xlsxfllst.index(xfl) + 1}/{xflen}】"
                  f"\tBegin\t{'-' * 15}")
            dftest = pd.read_excel(wcpath / xfl).drop_duplicates()
            updatewcitemsxlsx2note(name, dftest, wcpath, notebookguid)
            print(f"{'-' * 15}\t{name}\t【{xlsxfllst.index(xfl) + 1}/{xflen}】"
                  f"\tDone!\t{'-' * 15}")
def writeini():
    """
    Write the Evernote API call count to the config file for later use. Note:
    the function lives in this file because a global variable cannot be shared
    across files.
    :return:
    """
    global ENtimes
    # print(ENtimes)
    # print(str(datetime.datetime.now()))
    cfp.set('evernote', 'apicount', '%d' % ENtimes)
    cfp.set('evernote', 'apilasttime',
            '%s' % datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S'))
    cfp.write(open(inifilepath, 'w', encoding='utf-8'))
    log.info('Evernote API调用次数:%d,写入配置文件%s' %
             (ENtimes, os.path.split(inifilepath)[1]))
def fetchweatherinfo_from_googledrive():
    weatherdatalastestday = getnewestdataday('存储数据')
    today = datetime.datetime.now().strftime('%F')
    hour = int(datetime.datetime.now().strftime('%H'))
    if (today > weatherdatalastestday):  # or True:
        df = getweatherfromgoogledrive()
        if df is not None:
            log.info('通过读Google drive表格,获取天气信息%d条。' % df.shape[0])
            print(df['date'].max())
            weathertxtlastestday = df['date'].max().strftime('%F')
            setcfpoptionvalue('everlife', '天气', '存储数据最新日期',
                              '%s' % weathertxtlastestday)
            return df
def removeblanklinesfromtxt(fname):
    """
    Strip blank lines from a text file.
    """
    with open(fname, 'r') as f:
        fcontent = f.read()
    flst = fcontent.split('\n')
    blanklst = [x for x in flst if len(x) == 0]
    itemlst = [x for x in flst if len(x) > 0]
    log.info(f"文件《{fname}》内容行数量为:\t{len(itemlst)},空行数量为:\t{len(blanklst)}")
    if len(blanklst) != 0:
        with open(fname, 'w') as writer:
            writer.write('\n'.join(itemlst))
        log.info(f"文件《{fname}》只保留内容行(去除了空行),成功写入!!!")
def makenote(tokenmn, notestore, notetitle, notebody='真元商贸——休闲食品经营专家',
             parentnotebook=None):
    """
    Create a note.
    :param tokenmn:
    :param notestore:
    :param notetitle:
    :param notebody:
    :param parentnotebook:
    :return:
    """
    # global log
    nbody = "<?xml version=\"1.0\" encoding=\"UTF-8\"?>"
    nbody += "<!DOCTYPE en-note SYSTEM \"http://xml.evernote.com/pub/enml2.dtd\">"
    nbody += "<en-note>%s</en-note>" % notebody
    # Create note object
    ournote = Note()
    ournote.title = notetitle
    ournote.content = nbody
    # parentNotebook is optional; if omitted, default notebook is used
    if parentnotebook and hasattr(parentnotebook, 'guid'):
        ournote.notebookGuid = parentnotebook.guid
    # Attempt to create note in Evernote account
    try:
        note = notestore.createNote(tokenmn, ournote)
        evernoteapijiayi()
        # parentnotebook may be None or a bare guid string, so fall back gracefully
        log.info('笔记《' + notetitle + '》在笔记本《' +
                 getattr(parentnotebook, 'name', str(parentnotebook)) + '》中创建成功。')
        return note
    except EDAMUserException as usere:
        # Something was wrong with the note data
        # See EDAMErrorCode enumeration for error code explanation
        # http://dev.evernote.com/documentation/reference/Errors.html#Enum_EDAMErrorCode
        log.critical("用户错误!%s" % str(usere))
    except EDAMNotFoundException as notfounde:
        # Parent Notebook GUID doesn't correspond to an actual notebook
        print("无效的笔记本guid(识别符)!%s" % str(notfounde))
    except EDAMSystemException as systeme:
        if systeme.errorCode == EDAMErrorCode.RATE_LIMIT_REACHED:
            log.critical("API达到调用极限,需要 %d 秒后重来" % systeme.rateLimitDuration)
            exit(1)
        else:
            log.critical('创建笔记时出现严重错误:' + str(systeme))
            exit(2)
def add_friend(msg):
    # if this is not the designated data-analysis host and main account,
    # do not send a greeting
    thisid = getdeviceid()
    houseid = getinivaluefromnote('webchat', 'datahouse')
    mainaccount = getinivaluefromnote('webchat', 'mainaccount')
    helloword1 = getinivaluefromnote('webchat', 'helloword1')
    helloword2 = getinivaluefromnote('webchat', 'helloword2')
    men_wc = getcfpoptionvalue('everwebchat', get_host_uuid(), 'host_nickname')
    if (thisid != str(houseid) or (men_wc != mainaccount)):
        print(f"不是数据分析中心也不是主账号【{mainaccount}】,不用打招呼哟")
        return
    msg.user.verify()
    msg.user.send(f'Nice to meet you!\n{helloword1}\n{helloword2}')
    writefmmsg2txtandmaybeevernotetoo(msg)
    log.info(msg)
def checkphoneinfotable(dbname: str):
    """
    Check whether the contact and SMS tables have been created, and record the
    corresponding ini flags so the database file does not have to be opened and
    closed repeatedly just for this check.
    """
    # contact table check / creation
    if not (phonecontactdb := getcfpoptionvalue('everpim', str(getdeviceid()),
                                                'phonecontacttable')):
        tablename = "phone"
        print(phonecontactdb, tablename)
        csql = f"create table if not exists {tablename} (number str PRIMARY KEY not null unique on conflict ignore, name str, appendtime datetime)"
        ifnotcreate(tablename, csql, dbname)
        setcfpoptionvalue('everpim', str(getdeviceid()), 'phonecontacttable',
                          str(True))
        logstr = f"数据表{tablename}在数据库{dbname}中构建成功"
        log.info(logstr)
def fenxiyueduibi(sqlstr, xiangmu, notefenbudf, noteleixingdf, cnxf,
                  pinpai='', cum=False):
    # global log
    log.info(sqlstr)
    xmclause = xiangmu[0]
    jineclause = ' and (金额 >= 0) '
    brclause = ''
    if len(pinpai) > 0:
        brclause += ' and (品牌 = \'%s\') ' % pinpai
    sqlz = sqlstr % (xmclause, jineclause, brclause)
    dfz = pd.read_sql_query(sqlz, cnxf, parse_dates=['日期'])
    log.info(sqlz)
    cursor = cnxf.cursor()
    cursor.execute(f'attach database \'{dbpathdingdanmingxi}\' as \'C\'')
    sqlznew = sqlz.replace('xiaoshoumingxi', 'C.orderdetails')
    log.info(sqlznew)
    dfznew = pd.read_sql_query(sqlznew, cnxf, parse_dates=['日期'])
    # boundary between actual sales data and order-item data
    dfznew = dfznew[dfznew.日期 >= pd.to_datetime('2018-11-1')]
    # print(dfznew)
    xmclause = xiangmu[1]
    jineclause = ' and (金额 < 0) '
    sqlf = sqlstr % (xmclause, jineclause, brclause)
    dff = pd.read_sql_query(sqlf, cnxf, parse_dates=['日期'])
    log.info(sqlf)
    sqlfnew = sqlf.replace('xiaoshoumingxi', 'C.orderdetails')
    log.info(sqlfnew)
    dffnew = pd.read_sql_query(sqlfnew, cnxf, parse_dates=['日期'])
    # boundary between actual sales data and order-item data
    dffnew = dffnew[dffnew.日期 >= pd.to_datetime('2018-10-1')]
    # print(dffnew)
    cursor.execute('detach database \'C\'')
    cursor.close()
    df = pd.merge(dfz.append(dfznew), dff.append(dffnew), how='outer',
                  on=['日期', '年月', '客户编码', '区域', '类型', '品牌'], sort=True)
    df.fillna(0, inplace=True)
    # print(df.tail(10))
    kuangjiachutu(notefenbudf, noteleixingdf, df, xiangmu, cnxf, pinpai, cum)
def chengbenjiaupdateall(cnxc):
    dfsall = pd.read_sql_query(
        'select * from xiaoshoumingxi order by 日期, 单据编号', cnxc,
        parse_dates=['日期'])
    del dfsall['index']
    dfsall = chengbenjiaupdatedf(dfsall, cnxc)
    dfsall.to_sql(name='xiaoshoumingxi', con=cnxc, if_exists='replace',
                  chunksize=10000)
    log.info('要更新%d记录中的成本价和毛利内容' % len(dfsall))
    dfsall['年月'] = dfsall['日期'].apply(
        lambda x: datetime.datetime.strftime(x, '%Y%m'))
    print(dfsall.groupby('年月', as_index=False)[['数量', '成本金额', '金额', '毛利']].sum())