Example #1
# -*- coding: utf-8 -*-
import os
import dateutil.parser
from pydozer import BlogPost
from slugify import slugify
from first_post import first_post

second_post = BlogPost()

# These are all optional
second_post.data['author'] = 'Also Not Tod'
second_post.data['tags'] = ['third', 'testing', 'generated']  # Yes, I tagged it third.
second_post.data['extra_headers'] = """
    <link rel="stylesheet" href="extra.css" type="text/css" />
    <script type="text/javascript" src="extra.js"></script>
"""

# These are not optional at all
second_post.data['posted_date'] = dateutil.parser.parse('Fri Sep 26 20:36:28 MST 2003')
second_post.data['title'] = 'Oh hello, oh hello, second post to you'
second_post.data['filename'] = slugify(unicode(second_post.data['title']))
second_post.data['content'] = """
<p>This should be our second post.  This is how I'm linking to the <a href='%s/%s'>%s</a> (second post for those reading
the template file) and I could do it other ways if I wanted.  It's just python.</p>
""" % (second_post.data['config']['blog_base_dir'], first_post.data['filename'], first_post.data['title'])
Example #2
# -*- coding: utf-8 -*-
import os
import dateutil.parser
from pydozer import BlogPost
from slugify import slugify

nineteen_design = BlogPost()

# These are all optional
nineteen_design.data['author'] = 'Tod Hansmann'
nineteen_design.data['tags'] = ['tech', 'design', 'architecture', 'opinions']

# These are not optional at all
nineteen_design.data['posted_date'] = dateutil.parser.parse('Wed Oct 21 14:00:00 MST 2015')
nineteen_design.data['title'] = '1995 Oriented Design'
nineteen_design.data['filename'] = slugify(unicode(nineteen_design.data['title']))
nineteen_design.data['hook'] = """
<p>We are not designers here at NSC.  We have an artist, but not a web designer of any kind, so how do you design a UI when all you have is really good engineers and no idea what "pretty" is outside your world?</p>
"""
nineteen_design.data['content'] = """
<p>We started making <a href="https://phonejanitor.com">Phone Janitor</a> in June.  We had the idea a long time ago, but it was time to put it into action.  It was simple: give people the ability to make all their annoying calls go away by controlling who could get through to them, and how.</p>
<p>The only problem is we had (and still have) no idea what makes a pretty UI, and had to start somewhere.  We decided to start by prototyping user interaction flow.  This meant we needed to be able to iterate, have it usable, but we weren't going to be able to do it pretty.  We aren't really web devs by trade, more systems developers.</p>
<div class="section_header">Introducing the 90s, again for the first time</div>
<img class="article_img" src="/images/early-prototype.png"/>
<div class="caption">Our never published UI.</div>
<p>We started with what we knew really well: Geocities level web design.  This style used marquees, crappy javascript counters, iframes, and tables.  It was never meant to be seen by actual customers using the thing, so we got to be a bit silly and threw it together in an afternoon.  This had two benefits.</p>
<p>First, we could do it really fast and didn't have to focus on how it looked but how it performed.  Second, we didn't have to spend a lot of time learning a new tech stack just to get a prototype.  The old web still does indeed work on modern browsers (though it's slower somehow).</p>
<p>We had a working prototype of the frontend in about 3 days, all inclusive, and that was with several iterations and tests with friends (who are not devs, and certainly didn't use the web back then).  This separated feedback about function from feedback about look and feel.</p>
<div class="section_header">Gussy it up</div>
<p>We opted to let our skills in modern javascript come from doing everything raw, without a framework.  We wrote our own ajax call mechanisms and bumped heads with every browser's ridiculous assumptions about what is or is not a good user experience.  (Later I plan to write about how Firefox is the new IE, and how silly Microsoft was for not playing .wav files.)</p>
<img class="article_img" src="/images/oldbusted-routing.png"/>
Example #3
# -*- coding: utf-8 -*-
import os
import dateutil.parser
from pydozer import BlogPost
from slugify import slugify

first_post = BlogPost()

# These are all optional
first_post.data['author'] = 'Tod Hansmann'
first_post.data['tags'] = ['meta', 'about', 'opinions']

# These are not optional at all
first_post.data['posted_date'] = dateutil.parser.parse('Thu Oct 25 15:10:00 MST 2015')
first_post.data['title'] = 'An Intro to TnL'
first_post.data['filename'] = slugify(unicode(first_post.data['title']))
first_post.data['hook'] = """
<p>This place is Tod and Lorna Hansmann's personal writings of some opinionated nature.  The only disclaimer is that we don't plan on censoring anything, though we may change things to clarify something, and we should all keep in mind what time things were written.  If you want to cherry pick something one of us said 10 years in the past, we're probably going to ignore you as that would be a bad position for you to have.</p>
"""
first_post.data['content'] = """
<p>If you have commentary, we don't keep that on the site.  Feel free to drop us an email or hit us up on other various places.  We are busy people, so don't expect a prompt response.  Worry not, your plight is not of a life-threatening nature, and therefore should not be a priority anyway.  Let it go a bit, it will do you good.</p>
<p>Good luck, and <a href="/tnlblog/listing1.doz">read more</a> with reckless abandon!</p>
"""
Example #4
# -*- coding: utf-8 -*-
import os
import dateutil.parser
from pydozer import BlogPost
from slugify import slugify
from nineteen_design import nineteen_design

scalable_tech = BlogPost()

# These are all optional
scalable_tech.data['author'] = 'Tod Hansmann'
scalable_tech.data['tags'] = ['tech', 'architecture', 'opinions']

# These are not optional at all
scalable_tech.data['posted_date'] = dateutil.parser.parse('Wed Oct 28 11:00:00 MST 2015')
scalable_tech.data['title'] = 'Scalable Tech - Tried and True Ways'
scalable_tech.data['filename'] = slugify(unicode(scalable_tech.data['title']))
scalable_tech.data['hook'] = """
<p>A lot of hot media tech likes to talk about "scale" as if it's a new problem.  Like most things in tech, it is not new, and we don't need to use young, hot things to achieve it, and we probably shouldn't.</p>
"""
scalable_tech.data['content'] = """
<p>We believe there's a place for tech like NoSQL databases, asynchronous programming, deployment containers, etc.  We believe we all know what that place is after we have enough experience to understand the tradeoffs involved in such technologies.  This article is not going to illuminate or diminish those places or technologies.  This is about how we can all scale without new things, and without a lot of extra expense in hardware or administration.</p>
<p>A word of caution: this is not likely to be plausible for those of us new to such infrastructure architecture.  That is ok, we were there too and we learned over time.  These are practices that will require us to know what we are interfacing with.  Like anything, there are tradeoffs there.  We will discuss some of those.</p>
<div class="section_header">Data stores</div>
<p>Let's dive in.  A lot of my own frustrations early on in dev were in one arena that seemed very much to be putting up walls to keep me from my data: the dreaded SQL.  It was finicky, exacting, and it was really hard to both develop with and understand the results of.  Joins are a particular mess for most of us, sometimes even after we've been doing SQL work for a long time.</p>
<p>Ultimately this pain lessened, but it is important to note that it still exists.  What I have learned here is that when I need something stable and predictable <i>every time</i> it runs, SQL has huge advantages, and as tooling has improved (and I have learned of these tools, like the plan evaluation in <a href="http://www.postgresql.org/docs/9.4/static/sql-explain.html">Postgres</a>), this predictability is critical to scaling in ways that matter to us at NSC.</p>
<p>Specifically we gain the ability to make our queries incredibly efficient on fairly large datasets (not "big data" size, but it's the wrong tech for that), and the ability to know exactly when an aberration occurs.  This means development effort is slightly longer up front, but it takes less to maintain, and all the monitoring (and most of the response to problems) can be automated.  Queries take a predictable range of time.  If a query is taking too long, we can fire off a monitoring alert.  A response script can analyze some common problems that might occur (but in practice represent hardware failure or similar critical events), and there are many things that the script can handle and send a followup alert on what it did.</p>
<p>The same can be said of many of our other data stores.  File I/O (and Network I/O) is one of those places programming paradigms typically break down.  When we decided, for <a href="https://phonejanitor.com">Phone Janitor</a>, to store our voicemails as files on a filesystem rather than in the database, this raised some eyebrows from some.  It turns out failure in filesystems is a known thing with 50 years of mitigation efforts from the community, and we can build on that.  It greatly simplifies how we handle operations, and most of our mitigations can be done without human intervention (unless a drive dies, of course).  That means we don't have to break our programming patterns as much, because the mechanisms are isolated and we can rely on a lot of history.</p>
<div class="section_header">Modules</div>
<p>Bifurcation of responsibilities seems to help in other areas too.  The voicemail storage is its own system.  The call handling is done on its own system.  The web API is on its own system.  All these systems can exist on their own server, or together, and they just get pointed to the right place to communicate to each other.  Plus, our choice to just use HTTP communications between them means we can load balance for free and handle entire node failures automatically.</p>
<p>Some would call this microservices or something, and the marketing can debate that all they want.  This is not a new concept, and a long time ago it did not require convincing management of its efficiency.  When we all used timeshares on a mainframe from our terminal, this was just a normal requirement.  This command has to run in an isolated way and feed into this other subsystem.  Piping data around was just how we did things for a long time.  That means we know not only how it can be done, but how it can go wrong, which means we can plan for it appropriately.  It just takes more reading, because I never used a mainframe, so I have to learn from the stories of those who came before.</p>
"""
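The slow-query monitoring described in the post (fire an alert when a query exceeds its predictable time range) can be sketched in a few lines. This is a toy illustration with sqlite3 and a made-up threshold, not NSC's actual tooling:

```python
import sqlite3
import time

SLOW_QUERY_SECONDS = 0.5  # hypothetical threshold for this sketch

def timed_query(conn, sql, alert=print):
    """Run a query and fire an alert callback if it takes too long."""
    start = time.monotonic()
    rows = conn.execute(sql).fetchall()
    elapsed = time.monotonic() - start
    if elapsed > SLOW_QUERY_SECONDS:
        # In real monitoring this would page someone or trigger a
        # response script; here it just calls the alert callback.
        alert('slow query (%.2fs): %s' % (elapsed, sql))
    return rows

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE voicemails (id INTEGER PRIMARY KEY, path TEXT)')
conn.execute("INSERT INTO voicemails (path) VALUES ('/vm/0001.wav')")
rows = timed_query(conn, 'SELECT path FROM voicemails')
print(rows)
```

The response-script idea from the post would hang off the alert callback: instead of printing, it could run EXPLAIN, check disk health, and send a followup on what it did.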
Example #5
# -*- coding: utf-8 -*-
import os
import dateutil.parser
from pydozer import BlogPost
from slugify import slugify
from nineteen_design import nineteen_design

scalable_tech = BlogPost()

# These are all optional
scalable_tech.data['author'] = 'Tod Hansmann'
scalable_tech.data['tags'] = ['tech', 'architecture', 'opinions']

# These are not optional at all
scalable_tech.data['posted_date'] = dateutil.parser.parse(
    'Wed Oct 28 11:00:00 MST 2015')
scalable_tech.data['title'] = 'Scalable Tech - Tried and True Ways'
scalable_tech.data['filename'] = slugify(unicode(scalable_tech.data['title']))
scalable_tech.data['hook'] = """
<p>A lot of hot media tech likes to talk about "scale" as if it's a new problem.  Like most things in tech, it is not new, and we don't need to use young, hot things to achieve it, and we probably shouldn't.</p>
"""
scalable_tech.data['content'] = """
<p>We believe there's a place for tech like NoSQL databases, asynchronous programming, deployment containers, etc.  We believe we all know what that place is after we have enough experience to understand the tradeoffs involved in such technologies.  This article is not going to illuminate or diminish those places or technologies.  This is about how we can all scale without new things, and without a lot of extra expense in hardware or administration.</p>
<p>A word of caution: this is not likely to be plausible for those of us new to such infrastructure architecture.  That is ok, we were there too and we learned over time.  These are practices that will require us to know what we are interfacing with.  Like anything, there are tradeoffs there.  We will discuss some of those.</p>
<div class="section_header">Data stores</div>
<p>Let's dive in.  A lot of my own frustrations early on in dev were in one arena that seemed very much to be putting up walls to keep me from my data: the dreaded SQL.  It was finicky, exacting, and it was really hard to both develop with and understand the results of.  Joins are a particular mess for most of us, sometimes even after we've been doing SQL work for a long time.</p>
<p>Ultimately this pain lessened, but it is important to note that it still exists.  What I have learned here is that when I need something stable and predictable <i>every time</i> it runs, SQL has huge advantages, and as tooling has improved (and I have learned of these tools, like the plan evaluation in <a href="http://www.postgresql.org/docs/9.4/static/sql-explain.html">Postgres</a>), this predictability is critical to scaling in ways that matter to us at NSC.</p>
<p>Specifically we gain the ability to make our queries incredibly efficient on fairly large datasets (not "big data" size, but it's the wrong tech for that), and the ability to know exactly when an aberration occurs.  This means development effort is slightly longer up front, but it takes less to maintain, and all the monitoring (and most of the response to problems) can be automated.  Queries take a predictable range of time.  If a query is taking too long, we can fire off a monitoring alert.  A response script can analyze some common problems that might occur (but in practice represent hardware failure or similar critical events), and there are many things that the script can handle and send a followup alert on what it did.</p>
<p>The same can be said of many of our other data stores.  File I/O (and Network I/O) is one of those places programming paradigms typically break down.  When we decided, for <a href="https://phonejanitor.com">Phone Janitor</a>, to store our voicemails as files on a filesystem rather than in the database, this raised some eyebrows from some.  It turns out failure in filesystems is a known thing with 50 years of mitigation efforts from the community, and we can build on that.  It greatly simplifies how we handle operations, and most of our mitigations can be done without human intervention (unless a drive dies, of course).  That means we don't have to break our programming patterns as much, because the mechanisms are isolated and we can rely on a lot of history.</p>
<div class="section_header">Modules</div>
<p>Bifurcation of responsibilities seems to help in other areas too.  The voicemail storage is its own system.  The call handling is done on its own system.  The web API is on its own system.  All these systems can exist on their own server, or together, and they just get pointed to the right place to communicate to each other.  Plus, our choice to just use HTTP communications between them means we can load balance for free and handle entire node failures automatically.</p>
"""
Example #6
# -*- coding: utf-8 -*-
import os
import dateutil.parser
from pydozer import BlogPost
from slugify import slugify

a_post = BlogPost()

# These are all optional
a_post.data['author'] = 'Not Tod'
a_post.data['tags'] = ['testing', 'generated', 'fourth']

# These are not optional at all
a_post.data['posted_date'] = dateutil.parser.parse('Sun Sep 28 10:36:28 MST 2003')
a_post.data['title'] = 'This is the fourth post'
a_post.data['filename'] = slugify(unicode(a_post.data['title']))
a_post.data['content'] = """
<p>I'm reusing the object's name because it absolutely does not matter.</p>
"""
Example #7
# -*- coding: utf-8 -*-
import os
import dateutil.parser
from pydozer import BlogPost
from slugify import slugify

first_post = BlogPost()

# These are all optional
first_post.data['author'] = 'Tod Hansmann'
first_post.data['tags'] = ['first', 'testing', 'generated']
first_post.data['never_used'] = 'This is never actually used, but I can throw it in for later use if I want'

# These are not optional at all
first_post.data['posted_date'] = dateutil.parser.parse('Thu Sep 25 10:36:28 MST 2003')
first_post.data['title'] = 'Test Blog Up and Running'
first_post.data['filename'] = slugify(unicode(first_post.data['title']))
first_post.data['content'] = """
<p>This should be our first post.  I could technically generate this or manipulate it after the fact, or use part of it, or mix it up <b>with HTML</b> if I want.</p>
<p>Really anything I want, as complex as I want, anywhere in python's vast libraries, or just leave it simple.</p>
<p>For instance we're using slugify to make SEO friendly URLs, but that's not required.  Just whatever will give us python datetimes for posted_date and a filename.</p>
"""
Example #8
# -*- coding: utf-8 -*-
import os
import dateutil.parser
from pydozer import BlogPost
from slugify import slugify

nineteen_design = BlogPost()

# These are all optional
nineteen_design.data['author'] = 'Tod Hansmann'
nineteen_design.data['tags'] = ['tech', 'design', 'architecture', 'opinions']

# These are not optional at all
nineteen_design.data['posted_date'] = dateutil.parser.parse(
    'Wed Oct 21 14:00:00 MST 2015')
nineteen_design.data['title'] = '1995 Oriented Design'
nineteen_design.data['filename'] = slugify(
    unicode(nineteen_design.data['title']))
nineteen_design.data['hook'] = """
<p>We are not designers here at NSC.  We have an artist, but not a web designer of any kind, so how do you design a UI when all you have is really good engineers and no idea what "pretty" is outside your world?</p>
"""
nineteen_design.data['content'] = """
<p>We started making <a href="https://phonejanitor.com">Phone Janitor</a> in June.  We had the idea a long time ago, but it was time to put it into action.  It was simple: give people the ability to make all their annoying calls go away by controlling who could get through to them, and how.</p>
<p>The only problem is we had (and still have) no idea what makes a pretty UI, and had to start somewhere.  We decided to start by prototyping user interaction flow.  This meant we needed to be able to iterate, have it usable, but we weren't going to be able to do it pretty.  We aren't really web devs by trade, more systems developers.</p>
<div class="section_header">Introducing the 90s, again for the first time</div>
<img class="article_img" src="/images/early-prototype.png"/>
<div class="caption">Our never published UI.</div>
<p>We started with what we knew really well: Geocities level web design.  This style used marquees, crappy javascript counters, iframes, and tables.  It was never meant to be seen by actual customers using the thing, so we got to be a bit silly and threw it together in an afternoon.  This had two benefits.</p>
<p>First, we could do it really fast and didn't have to focus on how it looked but how it performed.  Second, we didn't have to spend a lot of time learning a new tech stack just to get a prototype.  The old web still does indeed work on modern browsers (though it's slower somehow).</p>
<p>We had a working prototype of the frontend in about 3 days, all inclusive, and that was with several iterations and tests with friends (who are not devs, and certainly didn't use the web back then).  This separated feedback about function from feedback about look and feel.</p>
<div class="section_header">Gussy it up</div>
Example #9
# -*- coding: utf-8 -*-
import os
import dateutil.parser
from pydozer import BlogPost
from slugify import slugify

a_post = BlogPost()

# These are all optional
a_post.data['author'] = 'Tod Hansmann'
a_post.data['tags'] = ['generated', 'third']

# These are not optional at all
a_post.data['posted_date'] = dateutil.parser.parse('Sun Sep 28 9:36:28 MST 2003')
a_post.data['title'] = 'This is the third post'
a_post.data['filename'] = slugify(unicode(a_post.data['title']))
a_post.data['content'] = """
<p>Third post, hopefully an hour ahead of the fourth.</p>
"""