def generatePages(lang):
    menu = {}
    for page in config["pages"]:
        meta = config["pages"][page]["metadata"]
        if lang not in meta:
            continue
        i = meta["id"]
        # skip pages flagged hidden; everything else goes into the menu
        attr = config["pages"][page].get("attribute", {})
        if attr.get("hidden"):
            continue
        menu[i] = [meta[lang]["html"], meta[lang]["name"]]
    menu = OrderedDict(sorted(menu.items(), key=lambda t: t[0]))
    for page in config["pages"]:
        meta = config["pages"][page]["metadata"]
        if lang not in meta:
            continue
        id = meta["id"]
        kwargs = dict(base=meta[lang]["base"], title=meta[lang]["name"],
                      menu=menu, id=id)
        if "content" in config["pages"][page]:
            kwargs["content"] = config["pages"][page]["content"][lang]
        with open("output/" + meta[lang]["html"], 'w') as f:
            f.write(html_minify(render_template("page." + lang + ".html", **kwargs)))
def value_for_field(self, obj, field):
    if not field.startswith("card_"):
        raise UnknownFieldError
    _, fmt = field.rsplit("_", 1)
    if fmt not in ["bootstrap", "bulma"]:
        raise UnknownFieldError
    if obj.project is None:
        obj.reload("id", "project", "data")
    project = obj.project.fetch()
    ctx = {
        "cid": str(obj.id),
        "title": project.title,
        "references": project.references,
        "landing_page": f"/{project.id}/",
        "more": f"/{obj.id}",
    }
    ctx["descriptions"] = project.description.strip().split(".", 1)
    authors = [a.strip() for a in project.authors.split(",") if a]
    ctx["authors"] = {"main": authors[0], "etal": authors[1:]}
    ctx["data"] = j2h.convert(
        json=remap(obj.data, visit=visit, enter=enter),
        table_attributes='class="table is-narrow is-fullwidth has-background-light"',
    )
    return html_minify(render_template(f"card_{fmt}.html", **ctx))
def value_for_field(self, obj, field):
    if not field.startswith("card_"):
        raise UnknownFieldError
    _, fmt = field.rsplit("_", 1)
    if fmt not in ["bootstrap", "bulma"]:
        raise UnknownFieldError
    if obj.project is None or not obj.data:
        # try data reload to account for custom queryset manager
        obj.reload("id", "project", "data")
    # obj.project is LazyReference & Projects uses custom queryset manager
    DocType = obj.project.document_type
    exclude = list(DocType._fields.keys())
    only = ["title", "references", "description", "authors"]
    project = DocType.objects.exclude(*exclude).only(*only).with_id(obj.project.pk)
    ctx = {
        "cid": str(obj.id),
        "title": project.title,
        "references": project.references[:5],
        "landing_page": f"/projects/{project.id}/",
        "more": f"/contributions/{obj.id}",
    }
    ctx["descriptions"] = project.description.strip().split(".", 1)
    authors = [a.strip() for a in project.authors.split(",") if a]
    ctx["authors"] = {"main": authors[0], "etal": authors[1:]}
    ctx["data"] = j2h.convert(
        json=remap(obj.data, visit=visit, enter=enter),
        table_attributes='class="table is-narrow is-fullwidth has-background-light"',
    )
    return html_minify(render_template(f"card_{fmt}.html", **ctx))
def get(self, **kwargs):
    cid = kwargs["pk"]  # only Fetch enabled
    try:
        card = super().get(**kwargs)  # trigger DoesNotExist if necessary
        if not card["html"]:
            contrib = Contributions.objects.only("project", "data").get(pk=cid)
            info = Projects.objects.get(pk=contrib.project.id)
            ctx = {"cid": cid}
            ctx["title"] = info.title
            ctx["descriptions"] = info.description.strip().split(".", 1)
            authors = [a.strip() for a in info.authors.split(",") if a]
            ctx["authors"] = {"main": authors[0], "etal": authors[1:]}
            ctx["landing_page"] = f"/{contrib.project.id}/"
            ctx["more"] = f"/{cid}"
            ctx["urls"] = info.urls.values()
            card_script = get_resource_as_string("templates/linkify.min.js")
            card_script += get_resource_as_string("templates/linkify-element.min.js")
            card_script += get_resource_as_string("templates/card.min.js")
            fd = fdict(contrib.data, delimiter=".")
            ends = [f".{qk}" for qk in quantity_keys]
            for key in list(fd.keys()):
                if any(key.endswith(e) for e in ends):
                    value = fd.pop(key)
                    if key.endswith(ends[0]):
                        new_key = key.rsplit(".", 1)[0]  # drop .display
                        fd[new_key] = value
            data = fd.to_dict_nested()
            browser = get_browser()
            browser.execute_script(card_script, data)
            bs = BeautifulSoup(browser.page_source, "html.parser")
            ctx["data"] = bs.body.table
            # browser.close()
            rendered = html_minify(render_template("card.html", **ctx))
            tree = html.fromstring(rendered)
            inline(tree)
            card = Cards.objects.get(pk=cid)
            card.html = html.tostring(tree.body[0]).decode("utf-8")
            card.save()
        return super().get(**kwargs)
    except DoesNotExist:
        card = None
        try:
            card = Cards.objects.only("pk").get(pk=cid)
        except DoesNotExist:
            # Card has never been requested before: create and save an
            # unexecuted card entry to avoid rebuilds on subsequent requests
            contrib = Contributions.objects.only("project", "is_public").get(pk=cid)
            card = Cards(pk=cid, is_public=contrib.is_public)
            card.save()
            return self.get(**kwargs)
        if card is not None:
            raise DoesNotExist(f"Card {card.pk} exists but user not in project group")
def _render_ui_router_views():
    env = Environment(loader=FileSystemLoader(DEV_TEMPLATE_DIR), trim_blocks=True)
    for view in UI_ROUTER_VIEWS:
        template = env.get_template(view).render(production=True)
        output_file = os.path.join(config.PROD_STATIC_FILE_DIRECTORY, view)
        with open(output_file, 'w+') as f:
            f.write(html_minify(str(template)))
def minify(file, content):
    if file.suffix == '.js':
        return jsmin(content)
    if file.suffix == '.html':
        return html_minify(content)
    if file.suffix == '.css':
        return css_minify(content)
    return content
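The suffix dispatch above can also be written as a table lookup. A minimal sketch, assuming stand-in lambdas in place of the real `jsmin`/`html_minify`/`css_minify` from `css_html_js_minify` (the names and behavior of the stand-ins are illustrative only):

```python
from pathlib import Path

# stand-in minifiers; the snippet above uses jsmin, html_minify
# and css_minify from css_html_js_minify instead
MINIFIERS = {
    '.js': lambda s: s.replace('\n', ''),       # drop newlines
    '.html': lambda s: ' '.join(s.split()),     # collapse whitespace runs
    '.css': lambda s: s.replace(' ', ''),       # drop spaces
}

def minify(file: Path, content: str) -> str:
    # unknown suffixes fall back to the identity function,
    # mirroring the final `return content` above
    return MINIFIERS.get(file.suffix, lambda s: s)(content)
```

The dict keeps the suffix-to-minifier mapping in one place, so adding a format is a one-line change.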
def get(self, **kwargs):
    cid = kwargs["pk"]  # only Fetch enabled
    qfilter = lambda qs: self.has_read_permission(request, qs.clone())
    try:
        # trigger DoesNotExist if necessary (due to permissions or non-existence)
        card = self._resource.get_object(cid, qfilter=qfilter)
        if not card.html or not card.bulma:
            contrib = Contributions.objects.only("project", "data").get(pk=cid)
            info = Projects.objects.get(pk=contrib.project.id)
            ctx = info.to_mongo()
            ctx["cid"] = cid
            ctx["descriptions"] = info.description.strip().split(".", 1)
            authors = [a.strip() for a in info.authors.split(",") if a]
            ctx["authors"] = {"main": authors[0], "etal": authors[1:]}
            ctx["landing_page"] = f"/{contrib.project.id}/"
            ctx["more"] = f"/{cid}"
            data = contrib.to_mongo().get("data", {})
            ctx["data"] = j2h.convert(
                json=remap(data, visit=visit),
                table_attributes='class="table is-bordered is-striped is-narrow is-hoverable is-fullwidth"',
            )
            card.html = html_minify(render_template("card.html", **ctx))
            card.bulma = html_minify(render_template("card_bulma.html", **ctx))
            card.save()
        return self._resource.serialize(card, params=request.args)
    except DoesNotExist:
        card = None
        try:
            card = Cards.objects.only("pk").get(pk=cid)
        except DoesNotExist:
            # Card has never been requested before: create and save an
            # unexecuted card entry to avoid rebuilds on subsequent requests
            contrib = Contributions.objects.only("project", "is_public").get(pk=cid)
            card = Cards(pk=cid, is_public=contrib.is_public)
            card.save()
            return self.get(**kwargs)
        if card is not None:
            raise DoesNotExist(f"Card {card.pk} exists but user not in project group")
def build_html(html_description_path, i18n_json_path):
    with open(i18n_json_path, 'r') as i18n_json_r:
        i18n = json.load(i18n_json_r)
    with open(html_description_path, 'r') as html_description:
        description = html_description.read()
    i18n['widget']['description'] = html_minify(description)
    with open(i18n_json_path, 'w') as i18n_json_w:
        i18n_json_w.write(json.dumps(i18n, ensure_ascii=False, indent=2))
def make_news(sites, output=None, title='"News"'):
    news = gather_news(sites)
    date = datetime.datetime.now().strftime("%Y-%m-%d %H:%M")
    html = make_html(news, title, date)
    tmp_out = tempfile.mkstemp()[1]
    with open(tmp_out, 'w') as f:
        f.write(html)
    css = purge_css(os.path.join(module_path, 'blunt.css'), tmp_out)
    html = html_minify(html.replace('<style id="blunt"></style>', f'<style>{css}</style>'))
    return html
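The style-injection step in `make_news` swaps an empty placeholder tag for the purged CSS. A minimal stdlib sketch of just that replacement (the placeholder id comes from the snippet; the helper name is illustrative):

```python
def inline_css(html: str, css: str,
               placeholder: str = '<style id="blunt"></style>') -> str:
    # substitute the empty placeholder style tag with the real stylesheet;
    # if the placeholder is absent, the HTML is returned unchanged
    return html.replace(placeholder, f'<style>{css}</style>')
```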
def easy_minify(data, tool=None):
    try:
        if not tool:
            data = css_html_js_minify.html_minify(data)
        elif tool == 'css':
            data = css_html_js_minify.css_minify(data)
        elif tool == 'js':
            data = css_html_js_minify.js_minify(data)
    except Exception:
        # fall back to a crude whitespace collapse if minification fails
        data = re.sub(r'\n +<', '\n<', data)
        data = re.sub(r'>(\n| )+<', '> <', data)
    return last_change(data)
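The regex fallback in `easy_minify` can be exercised on its own. A stdlib-only sketch of that crude whitespace collapse (the function name is illustrative, not part of the snippet):

```python
import re

def crude_html_collapse(data: str) -> str:
    # drop indentation immediately before a tag
    data = re.sub(r'\n +<', '\n<', data)
    # collapse newline/space runs between a closing and an opening tag
    data = re.sub(r'>(\n| )+<', '> <', data)
    return data
```

This is far weaker than `html_minify` (it ignores comments, attributes, and `<pre>` content), which is why the snippet only uses it as a last resort.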
def build_html():
    # create all pages in the list
    for page in list_of_pages:
        print(f'Building {page}')
        with urllib.request.urlopen(f'http://127.0.0.1:5000/{page}.html') as response:
            html = response.read().decode('utf-8')
        html = htmlmin.minify(html, remove_comments=True, remove_empty_space=True)
        html = html_minify(html)
        html = remove_spaces(html)
        save_file(html, f'{page}.html', '/build')
def as_dict(self, minify):
    ret = {
        "type": "html",
        "title": self.subs["title"],
        "body": html_minify(self.subs["body"]) if minify else str(self.subs["body"]),
        "complete": True,
    }
    if self.subs["css"] != "":
        ret["css"] = self.subs["css"]
    return ret
def get(self, cid):
    """Retrieve card for a single contribution.
    ---
    operationId: get_card
    parameters:
        - name: cid
          in: path
          type: string
          pattern: '^[a-f0-9]{24}$'
          required: true
          description: contribution ID (ObjectId)
    responses:
        200:
            description: contribution card
            schema:
                type: string
    """
    ctx = {'cid': cid}
    mask = ['project', 'identifier', 'content.data']
    contrib = Contributions.objects.only(*mask).get(id=cid)
    info = Projects.objects.get(project=contrib.project)
    ctx['title'] = info.title
    ctx['descriptions'] = info.description.strip().split('.', 1)
    authors = [a.strip() for a in info.authors.split(',') if a]
    ctx['authors'] = {'main': authors[0], 'etal': authors[1:]}
    debug = current_app.config['DEBUG']
    ctx['landing_page'] = f'/{contrib.project}'
    ctx['more'] = f'/explorer/{cid}'
    ctx['urls'] = info.urls.values()
    card_script = get_resource_as_string('templates/linkify.min.js')
    card_script += get_resource_as_string('templates/linkify-element.min.js')
    card_script += get_resource_as_string('templates/card.min.js')
    data = unflatten(dict(
        (k.rsplit('.', 1)[0] if k.endswith('.display') else k, v)
        for k, v in nested_to_record(contrib.content.data, sep='.').items()
        if not k.endswith('.value') and not k.endswith('.unit')))
    browser = get_browser()
    browser.execute_script(card_script, data)
    bs = BeautifulSoup(browser.page_source, 'html.parser')
    ctx['data'] = bs.body.table
    browser.close()
    rendered = html_minify(render_template('card.html', **ctx))
    tree = html.fromstring(rendered)
    inline(tree)
    card = html.tostring(tree.body[0]).decode('utf-8')
    return card
def build_web_app_pages():
    pages_dir = './pages'
    directory = fsencode(pages_dir)
    for file in listdir(directory):
        file_name = fsdecode(file)
        if file_name.endswith('.html'):
            print('HTML file found: "./pages/' + file_name + '"')
            with open(pages_dir + '/' + file_name, 'r', encoding="utf-8") as html_file:
                page = html_file.read()
            with open('./build/pages/' + file_name, 'w+', encoding="utf-8") as built_html_file:
                built_html_file.write(html_minify(page))
def convert(inName):
    with open("./svgs/" + inName, "r") as f:
        contents = f.read()
    # strip the embedded stylesheet
    parts = contents.split('<style type="text/css">')
    l = parts[0]
    r = parts[1].split("</style>")[1]
    contents = (l + r).split("<clipPath")[0] + "</g></svg>"
    contents = contents.replace("lightgray", "black")
    # keep only the contents of the main transform group
    contents = contents.split('<g transform="scale(1, -1) translate(0, -900)">')[1].split("</g>")[0]
    contents = html_minify(contents)
    # re-split into individual <path> elements
    al = contents.split("<path")[1:]
    contents = ["<path" + i for i in al]
    contents = json.dumps(contents)
    with open("./paths/" + inName.split(".")[0] + ".path", "w") as z:
        z.write(contents)
def minify_email_html(
    html: str, save_path: Union[str, Path] = None, include_comments: bool = False
) -> str:
    """Minifies the provided HTML.

    Warning: the returned HTML may not display as expected in email clients.

    Args:
        html (str): Source HTML to be minified.
        save_path (Union[str, Path], optional): Path to save the minified file to.
            Defaults to None, in which case the file is not saved.
        include_comments (bool, optional): Whether to keep comments. Defaults to False.

    Returns:
        str: Minified version of the provided HTML.
    """
    minified_html = html_minify(html, comments=include_comments)
    if save_path:
        with open(str(save_path), "w") as fout:
            fout.writelines(minified_html)
    return minified_html
def process_html(filepath, source, dest, config):
    with open(filepath, 'r+', encoding="utf-8") as file:
        changes = []
        lines = file.readlines()
        for i in range(len(lines)):
            if '<!--#include' in lines[i]:
                # extract the quoted path from the include directive
                replacewith = lines[i].split(' ')[1].split('"')[1]
                changes.append((i, replacewith))
    # splice included files in from the bottom up so earlier
    # line numbers stay valid
    for line, change_path in changes[::-1]:
        with open(os.path.join(source, change_path), 'r', encoding="utf-8") as included:
            included_lines = included.readlines()
        lines = lines[:line] + included_lines + lines[line + 1:]
    dest = os.path.join(dest, filepath[len(source) + 1:])
    new_file = ''.join(lines)  # readlines() keeps the trailing newlines
    if config['MINIFY']['html'] == 'True':
        new_file = html_minify(new_file)
    save_file(new_file, dest)
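The `<!--#include` handling above follows the Apache SSI directive shape, `<!--#include file="..." -->`. A regex-based sketch that resolves such directives from an in-memory mapping (the helper name and mapping argument are illustrative, not the snippet's API):

```python
import re

INCLUDE_RE = re.compile(r'<!--#include\s+file="([^"]+)"\s*-->')

def resolve_includes(text: str, files: dict) -> str:
    # replace each include directive with the named file's content;
    # a missing key raises KeyError, surfacing broken includes early
    return INCLUDE_RE.sub(lambda m: files[m.group(1)], text)
```

Substituting via `re.sub` avoids the line-index bookkeeping of the splice-in-reverse approach, at the cost of not preserving line boundaries.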
def get(self, cid):
    """Retrieve card for a single contribution.
    ---
    operationId: get_card
    parameters:
        - name: cid
          in: path
          type: string
          pattern: '^[a-f0-9]{24}$'
          required: true
          description: contribution ID (ObjectId)
    responses:
        200:
            description: contribution card
            schema:
                type: string
    """
    ctx = {'cid': cid}
    mask = ['project', 'identifier', 'content.data']
    contrib = Contributions.objects.only(*mask).get(id=cid)
    info = Projects.objects.get(project=contrib.project)
    ctx['title'] = info.title
    ctx['descriptions'] = info.description.strip().split('.', 1)
    authors = [a.strip() for a in info.authors.split(',') if a]
    ctx['authors'] = {'main': authors[0], 'etal': authors[1:]}
    debug = current_app.config['DEBUG']
    ctx['landing_page'] = f'/{contrib.project}'
    ctx['more'] = f'/explorer/{cid}'
    ctx['urls'] = info.urls.values()
    card_script = get_resource_as_string('templates/card.min.js')
    data = contrib.content.data
    browser = get_browser()
    browser.execute_script(card_script, data)
    src = browser.page_source.encode("utf-8")
    browser.close()
    bs = BeautifulSoup(src, 'html.parser')
    ctx['data'] = bs.body.table
    rendered = html_minify(render_template('card.html', **ctx))
    tree = html.fromstring(rendered)
    inline(tree)
    return html.tostring(tree.body[0])
def __call__(self, request):
    response = self.get_response(request)
    if (not self.enabled) or (not isinstance(response, HttpResponse)):
        return response
    content_type = response.get('Content-Type')
    if not content_type or not content_type.startswith('text/html'):
        return response
    match = re.search(r'charset=([a-zA-Z0-9_\-]+)', content_type)
    charset = match.group(1) if match else 'utf-8'
    try:
        body = response.content.decode(charset)
        body = html_minify(body)
        response.content = body.encode(charset)
    except Exception:  # NOQA
        pass  # ignore minification errors and return the original response
    return response
def render(args, md):
    logging.info('Start rendering')
    template = '''<!DOCTYPE html>
<!--
Generated with md2html {version}
Homepage: https://github.com/Phuker/md2html
-->
<html>
<head>
{head_insert}<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, minimal-ui">
<title>{title}</title>
{css_html_block}
{head_append}</head>
<body>
{body_insert}<div class="markdown-body">
{html_content}
</div>
{body_append}</body>
</html>
'''
    title = args.title
    head_insert = ''.join([_ + '\n' for _ in args.head_insert])
    head_append = ''.join([_ + '\n' for _ in args.head_append])
    body_insert = ''.join([_ + '\n' for _ in args.body_insert])
    body_append = ''.join([_ + '\n' for _ in args.body_append])
    css_file_list = [
        os.path.join(args.script_dir, 'github-markdown.css'),
        os.path.join(args.script_dir, 'pygments.css'),
        os.path.join(args.script_dir, 'main.css'),
    ]
    addon_styles = {
        'sidebar-toc': 'style-sidebar-toc.css',
        'dark': 'style-dark.css',
    }
    for style_name in args.style:
        css_file_list.append(os.path.join(args.script_dir, addon_styles[style_name]))
    css_file_list += args.append_css
    css_content_list = [read_file(_) for _ in css_file_list]
    if args.min_css:
        logging.info('Minify CSS')
        size_old = sum(map(len, css_content_list))
        css_content_list = [css_minify(_, comments=False) for _ in css_content_list]
        size_new = sum(map(len, css_content_list))
        logging.info('Size shrunk %d B/%d B = %.2f %%', size_old - size_new,
                     size_old, (size_old - size_new) / size_old * 100)
    css_html_block = '\n'.join([
        '<style type="text/css">\n' + _ + '\n</style>' for _ in css_content_list
    ])
    logging.info('Converting Markdown')
    html_content = convert(md)
    if args.min_html:
        logging.info('Minify HTML')
        size_old = len(html_content)
        html_content = html_minify(html_content, comments=False)
        size_new = len(html_content)
        logging.info('Size shrunk %d B/%d B = %.2f %%', size_old - size_new,
                     size_old, (size_old - size_new) / size_old * 100)
    template_args = {
        'version': __version__,
        'title': escape(title),
        'css_html_block': css_html_block,
        'html_content': html_content,
        'head_insert': head_insert,
        'head_append': head_append,
        'body_insert': body_insert,
        'body_append': body_append,
    }
    return template.format(**template_args)
os.mkdir(srcroot)
for filename in os.listdir(webroot):
    basename = re.sub("[^0-9a-zA-Z]+", "_", filename)
    srcfile = webroot + "/" + filename
    dstfile = srcroot + "/" + basename + ".h"
    varname = basename.upper()
    with open(srcfile, encoding="utf-8") as f:
        content = f.read().replace("${version}", version)
    try:
        if filename.endswith(".html"):
            content = html_minify(content)
        elif filename.endswith(".css"):
            content = css_minify(content)
        elif filename.endswith(".js") or filename.endswith(".json"):
            content = js_minify(content)
    except Exception:
        print("WARN: Unable to minify")
    with open(dstfile, "w") as dst:
        # emit the asset as a C raw-string literal stored in PROGMEM
        dst.write("const char ")
        dst.write(varname)
        dst.write("[] PROGMEM = R\"==\"==(")
        dst.write(content)
        dst.write(")==\"==\";\n")
        dst.write("const int ")
        dst.write(varname)
def re_error(data):
    if data == '/ban':
        ip = ip_check()
        end = '권한이 맞지 않는 상태 입니다.'
        if ban_check() == 1:
            curs.execute("select end, why from ban where block = ?", [ip])
            d = curs.fetchall()
            if not d:
                m = re.search(r"^([0-9]{1,3}\.[0-9]{1,3})", ip)
                if m:
                    curs.execute("select end, why from ban where block = ? and band = 'O'",
                                 [m.groups()[0]])
                    d = curs.fetchall()
            if d:
                if d[0][0]:
                    end = d[0][0] + ' 까지 차단 상태 입니다. / 사유 : ' + d[0][1]
                    now = re.sub(':', '', get_time())
                    now = re.sub(r'\-', '', now)
                    now = int(re.sub(' ', '', now))
                    day = re.sub(r'\-', '', d[0][0])
                    if re.search(':', day):
                        day = re.sub('( |:)', '', day)
                    else:
                        day += '000000'
                    if now >= int(day):
                        curs.execute("delete from ban where block = ?", [ip])
                        conn.commit()
                        end = '차단이 풀렸습니다. 다시 시도 해 보세요.'
                else:
                    end = '영구 차단 상태 입니다. / 사유 : ' + d[0][1]
        return html_minify(template(
            'index', imp=['권한 오류', wiki_set(1), custom(), other2([0, 0])],
            data=end, menu=0))
    d = re.search(r'\/error\/([0-9]+)', data)
    if not d:
        return redirect('/')
    num = int(d.groups()[0])
    title = None  # unknown error numbers fall through to a redirect
    if num == 1: title, data = '권한 오류', '비 로그인 상태 입니다.'
    elif num == 2: title, data = '권한 오류', '이 계정이 없습니다.'
    elif num == 3: title, data = '권한 오류', '권한이 모자랍니다.'
    elif num == 4: title, data = '권한 오류', '관리자는 차단, 검사 할 수 없습니다.'
    elif num == 5: title, data = '사용자 오류', '그런 계정이 없습니다.'
    elif num == 6: title, data = '가입 오류', '동일한 아이디의 사용자가 있습니다.'
    elif num == 7: title, data = '가입 오류', '아이디는 20글자보다 짧아야 합니다.'
    elif num == 8: title, data = '가입 오류', '아이디에는 한글과 알파벳과 공백만 허용 됩니다.'
    elif num == 9: title, data = '파일 올리기 오류', '파일이 없습니다.'
    elif num == 10: title, data = '변경 오류', '비밀번호가 다릅니다.'
    elif num == 11: title, data = '로그인 오류', '이미 로그인 되어 있습니다.'
    elif num == 12: title, data = '편집 오류', '누군가 먼저 편집 했습니다.'
    elif num == 16: title, data = '파일 올리기 오류', '파일 이름을 다른 걸로 설정 해주세요.'
    elif num == 14: title, data = '파일 올리기 오류', 'jpg, gif, jpeg, png, webp만 가능 합니다.'
    elif num == 15: title, data = '편집 오류', '편집 기록은 500자를 넘을 수 없습니다.'
    elif num == 16: title, data = '파일 올리기 오류', '동일한 이름의 파일이 있습니다.'  # unreachable: 16 is matched above
    elif num == 17: title, data = '파일 올리기 오류', '파일 용량은 ' + wiki_set(3) + 'MB를 넘길 수 없습니다.'
    elif num == 18: title, data = '편집 오류', '내용이 원래 문서와 동일 합니다.'
    elif num == 19: title, data = '이동 오류', '이동 하려는 곳에 문서가 이미 있습니다.'
    elif num == 20: title, data = '비밀번호 오류', '재 확인이랑 비밀번호가 다릅니다.'
    if title:
        return html_minify(template(
            'index', imp=[title, wiki_set(1), custom(), other2([0, 0])],
            data=data, menu=0))
    return redirect('/')
def custom_render_template(template: Template, *args, **kwargs):
    rendered = template.render(*args, **kwargs)
    minified = html_minify(rendered)
    return minified
def run(self, text):
    from css_html_js_minify import html_minify
    return html_minify(text)
def render_template(cls, template, **kwargs):
    # collapse leftover double spaces and inter-tag whitespace after minifying
    return (html_minify(template.render(kwargs))
            .replace("  ", " ")
            .replace("> <", "><")
            .replace(" >", ">"))
def re_error(conn, data):
    curs = conn.cursor()
    if data == '/ban':
        ip = ip_check()
        end = '|| 사유 || 권한이 맞지 않는 상태 입니다. ||'
        if ban_check(conn) == 1:
            curs.execute("select end, why from ban where block = ?", [ip])
            d = curs.fetchall()
            if not d:
                m = re.search(r"^([0-9]{1,3}\.[0-9]{1,3})", ip)
                if m:
                    curs.execute("select end, why from ban where block = ? and band = 'O'",
                                 [m.groups()[0]])
                    d = curs.fetchall()
            if d:
                end = '|| 상태 ||'
                if d[0][0]:
                    now = int(re.sub('(:|-| )', '', get_time()))
                    day = re.sub(r'\-', '', d[0][0])
                    if re.search(':', day):
                        day = re.sub('( |:)', '', day)
                    else:
                        day += '000000'
                    if now >= int(day):
                        curs.execute("delete from ban where block = ?", [ip])
                        conn.commit()
                        end += '차단이 풀렸습니다. 다시 시도 해 보세요.'
                    else:
                        end += d[0][0] + ' 까지 차단 상태 입니다.'
                else:
                    end += '영구 차단 상태 입니다.'
                end += '||'
                if d[0][1] != '':
                    end += '\r\n|| 사유 || ' + d[0][1] + ' ||'
        return html_minify(template(
            'index',
            imp=['권한 오류', wiki_set(conn, 1), custom(conn), other2([0, 0])],
            data=namumark(conn, "", "[목차(없음)]\r\n== 권한 상태 ==\r\n" + end, 0, 0, 0),
            menu=0))
    d = re.search(r'\/error\/([0-9]+)', data)
    if not d:
        return redirect('/')
    num = int(d.groups()[0])
    if num == 1: title, data = '권한 오류', '비 로그인 상태 입니다.'
    elif num == 2: title, data = '권한 오류', '이 계정이 없습니다.'
    elif num == 3: title, data = '권한 오류', '권한이 모자랍니다.'
    elif num == 4: title, data = '권한 오류', '관리자는 차단, 검사 할 수 없습니다.'
    elif num == 5: title, data = '사용자 오류', '그런 계정이 없습니다.'
    elif num == 6: title, data = '가입 오류', '동일한 아이디의 사용자가 있습니다.'
    elif num == 7: title, data = '가입 오류', '아이디는 20글자보다 짧아야 합니다.'
    elif num == 8: title, data = '가입 오류', '아이디에는 한글과 알파벳과 공백만 허용 됩니다.'
    elif num == 9: title, data = '파일 올리기 오류', '파일이 없습니다.'
    elif num == 10: title, data = '변경 오류', '비밀번호가 다릅니다.'
    elif num == 11: title, data = '로그인 오류', '이미 로그인 되어 있습니다.'
    elif num == 12: title, data = '편집 오류', '누군가 먼저 편집 했습니다.'
    elif num == 13: title, data = '리캡차 오류', '리캡차를 통과하세요.'
    elif num == 14: title, data = '파일 올리기 오류', 'jpg, gif, jpeg, png, webp만 가능 합니다.'
    elif num == 15: title, data = '편집 오류', '편집 기록은 500자를 넘을 수 없습니다.'
    elif num == 16: title, data = '파일 올리기 오류', '동일한 이름의 파일이 있습니다.'
    elif num == 17: title, data = '파일 올리기 오류', '파일 용량은 ' + wiki_set(conn, 3) + 'MB를 넘길 수 없습니다.'
    elif num == 18: title, data = '편집 오류', '내용이 원래 문서와 동일 합니다.'
    elif num == 19: title, data = '이동 오류', '이동 하려는 곳에 문서가 이미 있습니다.'
    elif num == 20: title, data = '비밀번호 오류', '재 확인이랑 비밀번호가 다릅니다.'
    elif num == 21: title, data = '편집 오류', '편집 필터에 의해 검열 되었습니다.'
    else: title, data = '정체 불명의 오류', '???'
    if title:
        return html_minify(template(
            'index',
            imp=[title, wiki_set(conn, 1), custom(conn), other2([0, 0])],
            data=namumark(conn, "", "[목차(없음)]\r\n== 오류 발생 ==\r\n" + data, 0, 0, 0),
            menu=0))
    return redirect('/')
def re_error(conn, data):
    curs = conn.cursor()
    if data == '/ban':
        ip = ip_check()
        end = '<li>사유 : 권한이 맞지 않는 상태 입니다.</li>'
        if ban_check(conn) == 1:
            curs.execute("select end, why from ban where block = ?", [ip])
            d = curs.fetchall()
            if not d:
                m = re.search(r"^([0-9]{1,3}\.[0-9]{1,3})", ip)
                if m:
                    curs.execute("select end, why from ban where block = ? and band = 'O'",
                                 [m.groups()[0]])
                    d = curs.fetchall()
            if d:
                end = '<li>상태 : '
                if d[0][0]:
                    now = int(re.sub('(:|-| )', '', get_time()))
                    day = re.sub(r'\-', '', d[0][0])
                    if re.search(':', day):
                        day = re.sub('( |:)', '', day)
                    else:
                        day += '000000'
                    if now >= int(day):
                        curs.execute("delete from ban where block = ?", [ip])
                        conn.commit()
                        end += '차단이 풀렸습니다. 다시 시도 해 보세요.'
                    else:
                        end += d[0][0] + ' 까지 차단 상태 입니다.'
                else:
                    end += '영구 차단 상태 입니다.'
                end += '</li>'
                if d[0][1] != '':
                    end += '<li>사유 : ' + d[0][1] + '</li>'
        return html_minify(render_template(
            'index.html',
            imp=['권한 오류', wiki_set(conn, 1), custom(conn), other2([0, 0])],
            data='<h2>권한 상태</h2><ul>' + end + '</ul>', menu=0))
    d = re.search(r'\/error\/([0-9]+)', data)
    if not d:
        return redirect('/')
    num = int(d.groups()[0])
    if num == 1: title, data = '권한 오류', '비 로그인 상태 입니다.'
    elif num == 2: title, data = '권한 오류', '이 계정이 없습니다.'
    elif num == 3: title, data = '권한 오류', '권한이 모자랍니다.'
    elif num == 4: title, data = '권한 오류', '관리자는 차단, 검사 할 수 없습니다.'
    elif num == 5: title, data = '사용자 오류', '그런 계정이 없습니다.'
    elif num == 6: title, data = '가입 오류', '동일한 아이디의 사용자가 있습니다.'
    elif num == 7: title, data = '가입 오류', '아이디는 20글자보다 짧아야 합니다.'
    elif num == 8: title, data = '가입 오류', '아이디에는 한글과 알파벳과 공백만 허용 됩니다.'
    elif num == 9: title, data = '파일 올리기 오류', '파일이 없습니다.'
    elif num == 10: title, data = '변경 오류', '비밀번호가 다릅니다.'
    elif num == 11: title, data = '로그인 오류', '이미 로그인 되어 있습니다.'
    elif num == 12: title, data = '편집 오류', '누군가 먼저 편집 했습니다.'
    elif num == 13: title, data = '리캡차 오류', '리캡차를 통과하세요.'
    elif num == 14: title, data = '파일 올리기 오류', 'jpg, gif, jpeg, png, webp만 가능 합니다.'
    elif num == 15: title, data = '편집 오류', '편집 기록은 500자를 넘을 수 없습니다.'
    elif num == 16: title, data = '파일 올리기 오류', '동일한 이름의 파일이 있습니다.'
    elif num == 17: title, data = '파일 올리기 오류', '파일 용량은 ' + wiki_set(conn, 3) + 'MB를 넘길 수 없습니다.'
    elif num == 18: title, data = '편집 오류', '내용이 원래 문서와 동일 합니다.'
    elif num == 19: title, data = '이동 오류', '이동 하려는 곳에 문서가 이미 있습니다.'
    elif num == 20: title, data = '비밀번호 오류', '재 확인이랑 비밀번호가 다릅니다.'
    elif num == 21: title, data = '편집 오류', '편집 필터에 의해 검열 되었습니다.'
    elif num == 22: title, data = '파일 올리기 오류', '파일 이름은 알파벳, 한글, 띄어쓰기, 언더바, 빼기표만 허용 됩니다.'
    else: title, data = '정체 불명의 오류', '???'
    if title:
        return html_minify(render_template(
            'index.html',
            imp=[title, wiki_set(conn, 1), custom(conn), other2([0, 0])],
            data='<h2>오류 발생</h2><ul><li>' + data + '</li></ul>', menu=0))
    return redirect('/')
for outputFileName, contentFiles in job.items():
    goCode += " if name == \"" + outputFileName + "\" {\n"
    output = ""  # file content BEFORE converting to hex representation
    # generate html output
    if isHtml(outputFileName):
        print("Building html file '", outputFileName, "'")
        for file in contentFiles:
            print("  ", file)
            fh = open(os.path.join(resDir, file), "r")
            if isHtml(file):
                output += html_minify(fh.read())
            else:
                print("  \033[91m\033[1mERROR: NO VALID FILE EXTENSION. SUPPORTED: .html\033[0m")
            fh.close()
    # generate template output (this one is a little bit weird and needs a clean up... :| )
    # TODO: support more than one level in the file system
    elif isTpl(outputFileName):
        print("Building template js file '", outputFileName, "'")
        output += "window._tpls = {"
        for file in contentFiles:
            print("  ", file)
def _process(cls, content):
    return html_minify(content)
def process_document(filepath, source, do_modtime_check=True):
    # parse the Markdown source
    parser = markdown.Markdown(extensions=MARKDOWN_EXTENSIONS,
                               extension_configs=MARKDOWN_CONFIG)
    content, temp = preproc.process(source)
    if temp:
        warn("This is a temporary post. Skipped.")
        return True
    content = parser.convert(content.replace(chr(8203), ""))  # remove zero-width spaces
    # prepare the meta data
    try:
        meta = parser.Meta
        if len(meta) == 0:
            raise AttributeError
    except AttributeError:
        warn("No metadata. Skipped.")
        return True
    mdinfo = {}
    for key, val in METAINFO_DEFAULTS.items():
        if val is None and key not in meta:
            error("Missing metainfo '%s'. Stopped." % key)
            return False
        if key not in meta:
            mdinfo[key] = val
        elif type(val) == bool:
            mdinfo[key] = True if meta[key] == "true" else False
        elif type(val) == list:
            mdinfo[key] = meta[key]
        else:
            mdinfo[key] = meta[key][0]
    # prepare the page template parameters
    toc, content = cut_toc(content)
    title = mdinfo["title"]
    create_time = generate_time(*mdinfo["create"].split("."))
    modified_time = generate_time(*mdinfo["modified"].split("."))
    if do_modtime_check and modified_time != generate_date(datetime.datetime.now()):
        warn("Modified time is not updated to today. (%s)" % filepath)
    tags = TagGroup()
    for x in mdinfo["tags"]:
        tags.append(x)
    folder = os.path.abspath(os.path.join(os.path.dirname(filepath), mdinfo["location"]))
    new_file = os.path.join(folder, os.path.splitext(os.path.basename(filepath))[0] + ".html")
    navigater.handle("myself", new_file)
    navigater.home_folder = os.path.abspath(".")
    index_title = escape_string(title)
    index_text = escape_string(bs4.BeautifulSoup(content, BEAUTIFUL_SOUP_PARSER).text)
    index_url = navigater.get_path("myself")
    filename = os.path.basename(filepath)
    words = len(index_text)
    pagetitle = mdinfo["title"].strip().replace("\"", " ")
    pagekey = hashlib.md5(pagetitle.encode("utf8")).hexdigest()
    pageurl = SITE_DOMAIN + os.path.relpath(
        os.path.abspath(filepath), start=os.path.abspath("."))[:-3] + ".html"
    # write the output file
    template_file = os.path.join(TEMPLATES_FOLDER, "%s.html" % mdinfo["template"])
    with open(template_file) as reader:
        template = reader.read()
    os.makedirs(folder, exist_ok=True)
    with open(new_file, "w") as writer:
        html_dom = template.format(
            title=title, create=create_time, modified=modified_time,
            stat=STAT_TEMPLATE.format(word=words,
                                      time=convert_time(words // WORDS_PER_MINUTE)),
            tags=str(tags), toc=toc, content=content,
            page_key=pagekey, page_title=pagetitle, page_url=pageurl,
            mdname=filename,
            github_location=os.path.join(GITHUB_LOCATION,
                                         os.path.dirname(index_url), filename))
        if html_minify:
            writer.write(html_minify(html_dom))
        else:
            writer.write(html_dom)
    # return the index information
    if mdinfo["index"]:
        return (index_title, index_url, index_text, mdinfo)
    return index_title
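The statistics block in `process_document` boils down to a reading-time estimate from a word count (`convert_time(words // WORDS_PER_MINUTE)`). A minimal standalone sketch of that calculation; the function name, default pace, and word-splitting strategy are illustrative (the snippet itself counts characters of the extracted text, which suits CJK prose):

```python
def reading_time_minutes(text: str, words_per_minute: int = 250) -> int:
    # count whitespace-separated words and floor-divide by the pace;
    # clamp to at least one minute so short posts never show zero
    words = len(text.split())
    return max(1, words // words_per_minute)
```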
def clean_html(inp):
    if settings.get_settings("DEV_MODE"):
        return inp
    return html_minify(inp)
def process_document(filepath, source, do_modtime_check=True):
    # parse the Markdown source
    parser = markdown.Markdown(extensions=MARKDOWN_EXTENSIONS,
                               extension_configs=MARKDOWN_CONFIG)
    content, temp = preproc.process(source)
    if temp:
        warn("This is a temporary post. Skipped.")
        return True
    content = parser.convert(content.replace(chr(8203), ""))  # remove zero-width spaces
    # prepare the meta data
    try:
        meta = parser.Meta
        if len(meta) == 0:
            raise AttributeError
    except AttributeError:
        warn("No metadata. Skipped.")
        return True
    mdinfo = {}
    for key, val in METAINFO_DEFAULTS.items():
        if val is None and key not in meta:
            error("Missing metainfo '%s'. Stopped." % key)
            return False
        if key not in meta:
            mdinfo[key] = val
        elif type(val) == bool:
            mdinfo[key] = meta[key][0] in ["true", "True", True]
        elif type(val) == list:
            mdinfo[key] = meta[key]
        else:
            mdinfo[key] = meta[key][0]
    # prepare the page template parameters
    toc, content = cut_toc(content)
    title = mdinfo["title"]
    create_time = generate_time(*mdinfo["create"].split("."))
    modified_time = generate_time(*mdinfo["modified"].split("."))
    if do_modtime_check and modified_time != generate_date(datetime.datetime.now()):
        warn("Modified time is not updated to today. (%s)" % filepath)
    tags = TagGroup()
    for x in mdinfo["tags"]:
        tags.append(x)
    folder = os.path.abspath(os.path.join(os.path.dirname(filepath), mdinfo["location"]))
    new_file = os.path.join(folder, os.path.splitext(os.path.basename(filepath))[0] + ".html")
    navigater.handle("myself", new_file)
    navigater.handle("md_url", filepath)
    navigater.home_folder = os.path.abspath(".")
    index_title = escape_string(title)
    index_text = escape_string(bs4.BeautifulSoup(content, "html5lib").text)
    index_url = navigater.get_path("myself")
    filename = os.path.basename(filepath)
    words = len(index_text)
    pagetitle = mdinfo["title"].strip().replace("\"", " ")
    pagekey = hashlib.md5(pagetitle.encode("utf8")).hexdigest()
    pageurl = SITE_DOMAIN + os.path.relpath(
        os.path.abspath(filepath), start=os.path.abspath("."))[:-3] + ".html"
    # write the output file
    template_file = os.path.join(TEMPLATES_FOLDER, "%s.html" % mdinfo["template"])
    with open(template_file) as reader:
        template = reader.read()
    os.makedirs(folder, exist_ok=True)
    with open(new_file, "w") as writer:
        html_dom = template.format(
            title=title, create=create_time, modified=modified_time,
            stat=STAT_TEMPLATE.format(word=words,
                                      time=convert_time(words // WORDS_PER_MINUTE)),
            tags=str(tags), toc=toc, content=content,
            page_key=pagekey, page_title=pagetitle, page_url=pageurl,
            mdname=filename,
            github_location=os.path.join(GITHUB_LOCATION, navigater.get_path("md_url")))
        if html_minify:
            writer.write(html_minify(html_dom))
        else:
            writer.write(html_dom)
    # return the index information
    if mdinfo["index"]:
        return (index_title, index_url, index_text, mdinfo)
    return index_title