I'm at the borderline of obsessed with Elixir and Phoenix lately. I've avoided writing about it so far because it feels a bit too good to be true. In an effort to test my own enthusiasm, I decided to rebuild this site with them in the most ridiculous way possible, just to push some limits. Because I already have an unhealthy obsession with PostgreSQL, we're getting crazy with it too.
DISCLAIMER: This is not a "how to build a blog" article. If you do what I'm about to do, people will look at you funny (and probably should).
To start, you need a bit of background. The old version of this site ran on CakePHP 1.2 / PHP 5 and MySQL on my old company server. It was based on a platform we'd built called The Intersect (we were/are huge fans of Chuck) that we used for all of our client projects.
That system let us design our database first according to our clients' needs, following the CakePHP naming rules as well as some that we added. It would generate an entire AJAX (SPA for you youngsters) administrative backend for them, complete with granular ACL-based permission controls, user management, automatic 301 redirects, sitemaps, feeds, asset minification, and accessibility considerations that would adjust based on who used it. We could also tailor its appearance and behavior to white-label projects for design firms.
It was a solid system that we were very proud of, but we pushed a lot of the limits of what you can do with a framework in PHP to create it. Lots of caching was necessary for the backend with all that going on. For most sites we were able to statically cache the entire frontend, simply serving it up directly via nginx. That's how this site used to work. I've tinkered with rebuilding this site for years since Brightball stopped doing contract work, but I just never came across something I really wanted to rebuild it with...until now.
Over the years I've considered rebuilding this site with Ruby on Rails, Catalyst for Perl, Groovy on Grails, PHP with Laravel / Lithium / raw PHP with routes in nginx, and most recently Go with Gin Gonic or Hugo. Each time I found a reason not to, ranging from hosting cost to productivity. Go was my most recent attempt, and while I really like Go as a language, Ruby has effectively ruined me for language productivity expectations. Go is just a little too low level for full stack web productivity, even with a framework. Great for APIs, system-level work and data processing...just not full stack web. The templating reminded me of JSP. When I started looking at Hugo just to statically generate the site, I was getting close to moving on entirely. I like having this site around as an outlet to write, play and experiment, so about halfway through that process I stopped.
I first heard about Elixir at RailsConf 2014 in Chicago. Some guy named Chris McCord was there doing an all-day session on it called All Aboard the Elixir Express. I somewhat wondered what it was about. I even walked in for a bit in the middle of it after I'd left a session that wasn't as interesting as I'd hoped, but I didn't stay long enough to really know what I was seeing. As far as I could tell at the time, it was just some language that generated another language. I knew next to nothing about it, but I had an extreme bias at this point against learning niche languages that were just reinventing something else I'd probably never use...
Holy Lord, was I wrong. I haven't been this excited about programming itself, much less "a language", since the web finally clicked years ago and I understood what you could do with it. I will expound on that in detail another time, but suffice it to say that I'd hit a point where I didn't think anything was ever going to bridge the productivity, performance, architecture and enjoyment gap. I figured you'd always have to trade one for the other. From the business side, that usually meant Ruby to get a polished product to market quickly, then rebuilding the bottlenecks in something faster once you had plenty of paying customers. Elixir and Phoenix virtually remove the tradeoff.
Now that I've got my excitement out of the way, on to the rebuild. I wanted something I could experiment with around the entire stack for no reason other than personal enjoyment, and then write about it. The core parts for the minimum viable port: articles and tags managed directly from a database tool, tag associations driven by a Postgres array column and database triggers, sitemap and RSS feeds, 301 redirects from the old URL structure, a contact form, and full text search powered by PostgreSQL.
You're probably looking at that wondering why anyone would want database triggers and direct DB access for their blog. See the disclaimer. You were warned. These are not normal or even rational architecture decisions for most people, but the customer is me and the customer is always right...so take that, logic. Also, Postgres isn't exactly "just a database" either.
Now, this isn't a tutorial, so I'm going to highlight some of the key things that I found interesting or struggled with while working on this initial rebuild and go from there. We'll close out with benchmarks, because everybody loves benchmarks. I'll probably end up writing subsequent articles about other OTP feature experiments, performance tuning, etc. as I do them. For this initial build, performance tuning was barely on my radar.
Navicat Premium has a handy tool in it called Data Transfer that will let you transfer data and tables from one database to another, even if they aren't the same type of database. To kick things off, I used this to get my data copied over to Postgres, renamed the tables, and then started generating my new database structure with Ecto migrations. You see, when I read Programming Phoenix I came across a passage that made me do a triple take...and if you've spent any amount of time around web frameworks, you'll do the same when you read this.
...Ecto allows developers to enjoy many of the guarantees databases offer in terms of data integrity. In fact, Ecto rewards developers for doing exactly this, both in the short term, by transforming constraint errors into user feedback, and in the long term by guaranteeing you won’t be awake at 3:00 a.m. fixing bugs caused by inconsistent data...(Programming Phoenix, Chapter 7, Page 122)
That excerpt came after several pages on the ins and outs of different database strategies, ranging from the "do everything in the application" to the "do everything in the database" extremes. I've preached this gospel of balance for years because I've lived through both extremes. I even taught a class on it because what I was seeing bothered me so much. The only thing the extremes get you is lock-in and a lot of problems that suddenly become much more difficult to correct.
McCord isn't kidding either. For example, here is an Ecto migration and the code that comes from it:
create table(:articles_tags) do
  add :article_id, references(:articles, on_delete: :delete_all)
  add :tag_id, references(:tags, on_delete: :delete_all)
end
Generates proper foreign keys...
CREATE TABLE "public"."articles_tags" (
  "id" int4 NOT NULL DEFAULT nextval('articles_tags_id_seq'::regclass),
  "article_id" int4,
  "tag_id" int4,
  CONSTRAINT "articles_tags_pkey" PRIMARY KEY ("id") NOT DEFERRABLE INITIALLY IMMEDIATE,
  CONSTRAINT "articles_tags_article_id_fkey" FOREIGN KEY ("article_id") REFERENCES "public"."articles" ("id") ON UPDATE NO ACTION ON DELETE CASCADE NOT DEFERRABLE INITIALLY IMMEDIATE,
  CONSTRAINT "articles_tags_tag_id_fkey" FOREIGN KEY ("tag_id") REFERENCES "public"."tags" ("id") ON UPDATE NO ACTION ON DELETE CASCADE NOT DEFERRABLE INITIALLY IMMEDIATE
)
WITH (OIDS=FALSE);
If you're not familiar with what that will do, it's simply this: if I try to create a record in this table with an article_id that doesn't match an id in the articles table...there will be an error. Ditto with tag_id. If I delete a tag from the tags table, all of the dead references in this table will go *poof* with it. You can alter that behavior to your liking; I've explicitly requested the delete-all behavior here. By simply using Ecto migrations with references, I've largely handed enforcement of referential integrity off to the database.
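And to collect on the "user feedback" half of that promise from the book, you pair the constraint with a changeset. Here's a minimal sketch of what that can look like; the module itself is hypothetical, but cast and foreign_key_constraint are standard Ecto:

defmodule Brightball.ArticleTag do
  use Brightball.Web, :model

  schema "articles_tags" do
    belongs_to :article, Brightball.Article
    belongs_to :tag, Brightball.Tag
  end

  def changeset(struct, params \\ %{}) do
    struct
    |> cast(params, [:article_id, :tag_id])
    # converts a database constraint violation into a changeset error
    |> foreign_key_constraint(:article_id)
    |> foreign_key_constraint(:tag_id)
  end
end

With that in place, inserting a bad article_id comes back as {:error, changeset} from Repo.insert instead of a raised exception at 3:00 a.m.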
The details of the tag migration aren't that interesting; suffice it to say I made the move using a flag in my tags table that defines whether or not a tag should be displayed. These are the tags I really care about, the ones that show up in the site footer and on the list pages. Others will also appear when you view an article or in search results. The biggest shift here is that I wanted tags to be easy for me to manage per-article from a database tool, without having to manually add them to a tags table and load in the references.
To accomplish that I added a tags field to my articles table with a Postgres Array type like so...
create table(:articles) do
  ...
  add :tags, {:array, :string}
end

# creates: "tags" varchar(255)[]
That creates an array of strings, which I can edit as easily as this:
{Elixir,Phoenix,PostgreSQL}
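In practice that means tagging an article is a one-line UPDATE from any database tool (the slug value here is just a stand-in):

UPDATE articles
SET tags = '{Elixir,Phoenix,PostgreSQL}'
WHERE slug = 'some-article-slug';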
But how do I translate that into my tags table? A database trigger! I created a trigger that fires whenever that field changes and rebuilds my tag associations. I'll get into the details of the function itself in a subsequent post because it's currently not efficient and the next post will fix that, but the trigger just looks like:
CREATE TRIGGER "article_tag_associations"
AFTER DELETE OR INSERT OR UPDATE OF "tags" ON "public"."articles"
FOR EACH ROW
EXECUTE PROCEDURE "rebuild_tag_associations_trigger"();
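The function body itself is coming in that next post, but to make the approach concrete, here's a rough sketch of what the naive version amounts to. The tags.name column and the exact SQL are my assumptions, not the site's actual function:

CREATE OR REPLACE FUNCTION rebuild_tag_associations_trigger() RETURNS trigger AS $$
BEGIN
  -- blow away every association...
  DELETE FROM articles_tags;
  -- ...and rebuild them all from every article's tags array
  -- (the real thing also creates any tags that don't exist yet, via slugify; omitted here)
  INSERT INTO articles_tags (article_id, tag_id)
  SELECT a.id, t.id
  FROM articles a
  CROSS JOIN LATERAL unnest(a.tags) AS tag_name
  JOIN tags t ON t.name = tag_name;
  RETURN NULL; -- return value is ignored for AFTER triggers
END;
$$ LANGUAGE plpgsql;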
The reason it's not efficient is that the rebuild_tag_associations_trigger() function blows away the entire articles_tags table for every article change and rebuilds it from all of the article arrays. It could instead handle just the diff of what changed on that particular record, and we'll do that in the next article. The other piece is how we reference the tags without knowing their ids. For that and the article slugs, we create a simple slugify function for the database, largely inspired by this slugify() for Postgres post but shortened for my needs to the function below, which is also fired by a trigger on article INSERT.
CREATE OR REPLACE FUNCTION slugify(value TEXT) RETURNS text AS $$
  SELECT regexp_replace(
    trim(regexp_replace(lower(unaccent(value)), E'[^\\w\\s-]', '', 'gi')),
    E'[-\\s]+', '-', 'gi'
  );
$$ LANGUAGE SQL STRICT IMMUTABLE;
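A quick sanity check of what it produces (this assumes the unaccent extension is installed, which the function requires):

SELECT slugify('No Such Thing as "Real Programming"');
-- returns: no-such-thing-as-real-programming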
Now that we've got the gist of how I can update the site with simple edits to the articles table, we need to get people to it. Here we get to play with pipelines. If you aren't familiar with Phoenix routes, it has a concept of pipelines. Coming from other stacks, you may be used to a middleware layer in your application that sits in front of all of your requests, such as Rack or similar.
A pipeline lets you define a middleware stack and apply it to specific routes. You can even chain pipelines together. By default, Phoenix comes with pipelines for :api and :browser:
pipeline :api do
  plug :accepts, ["json"]
end

pipeline :browser do
  plug :accepts, ["html"]
  plug :fetch_session
  plug :fetch_flash
  plug :protect_from_forgery
  plug :put_secure_browser_headers
end
But we're going to add one for content, since I don't really need all of the session-related stuff just to let you read my articles. We'll also add one for crawlers and feed readers.
pipeline :content do
  plug :accepts, ["html"]
  plug :put_secure_browser_headers
end

pipeline :crawler do
  plug :accepts, ["xml"]
end

scope "/", Brightball do
  pipe_through :content

  get "/", ArticleController, :index
  get "/articles/:slug", ArticleController, :show
  get "/tag/:slug", ArticleController, :tag
end
This is one of the areas where I hit a roadblock. The sitemap.xml route and the main articles.rss route were simple enough, but I kept getting a compiler error whenever I tried to structure my per-tag RSS URL as "/:slug.rss". I tried a number of things and couldn't avoid the compiler error, so I ended up just changing the URL. I'm sure there's a way to do it; I just haven't found a good example yet, although a gentleman on the Elixir Slack encouraged me to look at _format. Here's what I ended up with...
scope "/", Brightball do
  pipe_through :crawler

  get "/sitemap.xml", ArticleController, :sitemap
  get "/articles.rss", ArticleController, :feed
  get "/tag/:slug/rss", ArticleController, :feed
end
If you're looking at that and wondering why two paths are going to the same function, I'll tell you. Elixir uses pattern matching everywhere, even on function calls. That allows me to define two feed functions like so...
def feed(conn, %{ "slug" => slug }) do
  ...
end

def feed(conn, _params) do
  ...
end
And Elixir knows that because my function was passed a map containing a "slug" key, I want that function clause, with the slug bound to a variable I can immediately work with. If your head just exploded, I suggest the lovely Programming Elixir book.
My previous site used a URL structure of /category-slug/article-slug, resulting in things like /ruby/learning-ruby-on-rails. With the switch to tags, the only way to keep that structure would be to define primary tags, but then the routing gets complicated because you have to access a list of category slugs in the routing layer or it will get overzealous and expensive. Instead I opted for simplification to the /articles/:slug structure. To handle URL redirects, we just add a simple catch-all route at the end to handle anything that we missed the first time around.
scope "/", Brightball do
  get "/*path", ArticleUrlController, :show
end
Ideally, instead of acting as a route, we'd be able to recognize when a 404 is being passed back and intercept it to verify that it's not just an old URL. This approach only works if the request missed the routes entirely, which is good enough for my immediate needs, but we'll refine it later with a Plug.
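For the curious, here's a minimal sketch of what that catch-all action can look like, assuming a hypothetical ArticleUrl schema that maps old paths to new slugs:

defmodule Brightball.ArticleUrlController do
  use Brightball.Web, :controller

  def show(conn, %{"path" => path}) do
    # the wildcard arrives as a list of segments, e.g. ["ruby", "learning-ruby-on-rails"]
    old_path = "/" <> Enum.join(path, "/")

    case Repo.get_by(Brightball.ArticleUrl, path: old_path) do
      nil ->
        conn
        |> put_status(:not_found)
        |> put_view(Brightball.ErrorView)
        |> render("404.html")

      article_url ->
        # permanent redirect so search engines carry the old link equity over
        conn
        |> put_status(:moved_permanently)
        |> redirect(to: "/articles/" <> article_url.slug)
    end
  end
end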
Once the redirects were in place, we needed to head over to our Disqus admin tool and tell it to recrawl the site to read the redirects and move all of the previous story comments.
Next up was redirecting the bare domain to www. I started to put this into my code, but I didn't want to run a check for the root domain on every single request with a plug. My initial deployment of this site is to Heroku, and some Googling revealed that a www redirect there is very overcomplicated for what it does. Luckily for me, I use DNS Made Easy (for a lot of reasons that I'll cover later), and one of its handy built-in features is an HTTP redirection record, so I can just set it there with a couple of clicks.
$ curl --head http://brightball.com
HTTP/1.1 302 Found
Date: Wed, 06 Jul 2016 04:18:22 GMT
Content-Length: 0
Connection: close
Location: http://www.brightball.com
Server: DNSME HTTP Redirection
There's nothing particularly fancy about sitemaps and RSS feeds, but probably the most useful thing I can pass along is using the Timex library to format dates.
defmodule Brightball.ArticleView do
  use Brightball.Web, :view
  use Timex

  def sitemap_date_format(%Ecto.Date{} = date) do
    date
    |> Ecto.Date.to_erl
    |> Timex.Date.from
    |> Timex.format!("%Y-%m-%d", :strftime)
  end

  def sitemap_date_format(%Ecto.DateTime{} = date) do
    date
    |> Ecto.DateTime.to_erl
    |> Timex.Date.from
    |> Timex.format!("%Y-%m-%d", :strftime)
  end

  def rss_datetime_format(%Ecto.DateTime{} = date) do
    date
    |> Ecto.DateTime.to_erl
    |> Timex.DateTime.from
    |> Timex.format!("%a, %d %b %Y %H:%M:%S %z", :strftime)
  end
end
This uses pattern matching in the argument to know whether I'm passing in a Date or a DateTime and handles each appropriately. Sitemap dates are supposed to be in W3C Datetime format, while RSS feed datetimes are supposed to be in RFC 2822.
Also, if you're new to Elixir and wondering what that "|>" does: it's called the pipe operator, and it simply takes the preceding value and passes it as the first argument to the following function. Our sitemap_date_format function could be rewritten like this to do the same thing...
def sitemap_date_format(%Ecto.DateTime{} = date) do
  Timex.format!(Timex.Date.from(Ecto.DateTime.to_erl(date)), "%Y-%m-%d", :strftime)
end
When I first started the app up with the Timex library installed, I saw something interesting in my logs too:
[debug] Tzdata polling for update.
[debug] Tzdata downloading new data from https://www.iana.org/time-zones/repository/tzdata-latest.tar.gz
[debug] Tzdata data downloaded. Release version 2016f.
[info] Tzdata has updated the release from 2016e to 2016f
[debug] Tzdata deleting ETS table for version 2016e
[debug] Tzdata deleting ETS table file for version 2016e
That is the tzdata library (a dependency of Timex) downloading fresh timezone data and storing it in an in-memory ETS table. We'll be covering that more when we get to performance tuning.
I also discovered that the route helpers like article_url (which returns the full URL, as opposed to article_path for the relative one) use the url: [host: "example.com"] setting from your environment config file. I'd initially left that blank while getting set up on Heroku, but you have to fill it in for the _url helpers to work.
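For reference, that's this line in config/prod.exs (the OTP app name :brightball here is my assumption):

config :brightball, Brightball.Endpoint,
  url: [host: "www.brightball.com"]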
Also, the phoenix_html_sanitizer library comes in handy with the RSS feed to strip tags out of your content, depending on your needs.
sanitize(article.introduction, :strip_tags)
Contact Us is fairly straightforward, so I'm just going to mention what's probably different about mine. I wanted it to send me an email whenever somebody filled it out, but I've worked on enough projects where the email system was coupled too tightly to the application, and I didn't want to replicate that here. Ideally, sending email and managing email templates would be a separate microservice...which I didn't feel like writing.
Instead I opted to go with a nifty little tool called sendwithus that lets you manage templates and analytics from their tool while hooking into Sendgrid / Postmark / Sparkpost / Mailgun / your own SMTP server to actually handle deliveries. All I have to do to send, once the validations pass, is make a simple REST call. The templates themselves are managed totally outside of my application.
{
  "template": "tem_TEMPLATE_ID",
  "recipient": {
    "name": "RECIPIENT NAME",
    "address": "RECIPIENT EMAIL"
  },
  "template_data": {
    "message": "#{contact.message}",
    "company_name": "#{contact.company_name}",
    "contact_phone": "#{contact.contact_phone}",
    "email_address": "#{contact.email_address}",
    "first_name": "#{contact.first_name}",
    "last_name": "#{contact.last_name}"
  },
  "sender": {
    "name": "SENDER NAME",
    "address": "SENDER ADDRESS",
    "reply_to": "#{contact.email_address}"
  },
  "locale": "en-US",
  "esp_account": "esp_SMTP_DELIVERY_ID"
}
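The REST call itself can be a few lines with HTTPoison. Here's a hedged sketch; the endpoint and basic-auth scheme are from the sendwithus docs as I remember them, and the config key is mine, so treat all of it as assumptions:

# payload is a map shaped like the JSON above
api_key = Application.get_env(:brightball, :sendwithus_api_key)

HTTPoison.post!(
  "https://api.sendwithus.com/api/v1/send",
  Poison.encode!(payload),
  [{"Content-Type", "application/json"}],
  hackney: [basic_auth: {api_key, ""}] # the API key goes in as the basic-auth username
)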
While setting this up I came across this email_checker library for Elixir that provides 3 different mechanisms for validation, which you can call individually or as a group. The mechanisms are Format, MX and SMTP.
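Usage is about as simple as it gets; a quick sketch based on its README (double-check the exact module names against the current version):

EmailChecker.valid?("someone@example.com")              # runs all configured checks
EmailChecker.Check.Format.valid?("someone@example.com") # just the format check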
In a totally premature optimization move, I decided that I didn't want all of my various article pages wasting RAM on huge variables that didn't need to be used. For example, on a list page where I display the title and introduction for many articles, I don't need to retrieve the giant main body just to stick it in an unused variable. To get around this I could either modify all of my list-related queries, which would be tedious, OR I could just create a second Article model with only the fields I care about for those pages, like my ArticleList here...
articles = Repo.all(
  from c in ArticleList,
  where: not is_nil(c.publish_at) and c.static == false,
  order_by: [desc: c.display_date, desc: c.publish_at],
  preload: [:tags])

defmodule Brightball.ArticleList do
  use Brightball.Web, :model

  schema "articles" do
    field :title, :string
    field :slug, :string
    field :introduction, :string
    ...
  end
end
I don't want to spend any real time on the view layer in this article but the summary is that it's the most well thought out, efficient and flexible approach to a view layer in a framework that I've seen...and I'll leave it at that until I can dedicate a blog post to its glory.
Lastly, I set up a site search using PostgreSQL's full text search, which you can try in the upper right-hand corner. This is either simple or tricky depending on how complicated we want to get. If you want to keep things reasonably simple, you can just use Ecto fragments in the where and order_by clauses like so:
articles = Repo.all(
  from c in ArticleList,
  where: not is_nil(c.publish_at) and c.static == false
    and fragment("search_vector @@ plainto_tsquery(?)", ^term),
  order_by: fragment("ts_rank(search_vector, plainto_tsquery(?)) DESC", ^term),
  preload: [:tags])
This just uses a search_vector column that I've defined with all of the weighted fields that go into the search, and searches it for the tsquery built from whatever the user typed in. The plainto_tsquery function is there for convenience's sake, to filter out punctuation. Then we're using ts_rank to sort the results by relevance.
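For context, a search_vector column like that can be built with setweight. Here's a sketch of the idea, with the field weights being my guesses rather than the site's actual configuration:

ALTER TABLE articles ADD COLUMN search_vector tsvector;

UPDATE articles SET search_vector =
  setweight(to_tsvector('english', coalesce(title, '')), 'A') ||
  setweight(to_tsvector('english', coalesce(introduction, '')), 'B') ||
  setweight(to_tsvector('english', coalesce(main_body, '')), 'C');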
Fairly straightforward...but we don't do things straightforward on this site. Let's get complicated!
I wanted to display the relevance to you people, and in order to make that relevance clear we need to show an excerpt of what the search matched, so that it's obvious exactly why those results showed up. We also need to account for searches where people expect a different result. For example, searching for "postgres" gives me different results than "postgresql" or "pg".
To solve the last problem, PostgreSQL will let you set up a thesaurus or use ts_rewrite, which can transform a search query into several other potential searches. We're going to use the latter, as it's simpler and doesn't require a full reindex to work. To do this, we create an aliases table with two tsquery fields: one is the primary key and the other contains the queries a user might have meant. We pass a SELECT statement as the second argument to our ts_rewrite, with the original query as the first argument, and it will look something like this.
ts_rewrite(plainto_tsquery(?), 'SELECT * FROM aliases')
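The aliases table itself follows the two-tsquery convention from the PostgreSQL ts_rewrite docs, and a row covering the "pg" example from earlier might look like this (the column names are the docs' convention, not necessarily mine):

CREATE TABLE aliases (
  t tsquery PRIMARY KEY,
  s tsquery
);

INSERT INTO aliases VALUES ('pg', 'pg | postgres | postgresql');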
One problem with a statement like that is that we don't want to keep repeating it all over the place. Our ranking function needs it, and so does the headline function that snags an excerpt. Normally with PostgreSQL you'd write it once as part of your FROM clause and then reference it by name, like so:
FROM articles,
     ts_rewrite(plainto_tsquery(?), 'SELECT * FROM aliases') search_query
WHERE articles.search_vector @@ search_query
I couldn't figure out how to do that with Ecto, though. Luckily, there's a simple workaround: use an INNER JOIN, since this is part of our WHERE clause anyway.
join: sq in fragment("(SELECT ts_rewrite(plainto_tsquery(?), 'SELECT * FROM aliases') search_query)", ^term),
on: fragment("search_vector @@ ?", sq.search_query),
We're going to be including some very specific results in this query that don't really fit our schema at all, so to make this work we're using an Ecto schemaless query, which we can largely use the same way on the search results page. The final result ended up looking like this:
articles = Repo.all(
  from c in "articles",
  join: sq in fragment("(SELECT ts_rewrite(plainto_tsquery(?), 'SELECT * FROM aliases') search_query)", ^term),
  on: fragment("search_vector @@ ?", sq.search_query),
  where: not is_nil(c.publish_at) and c.static == false,
  order_by: fragment("rank DESC"),
  select: %{
    id: c.id,
    title: c.title,
    slug: c.slug,
    display_date: c.display_date,
    rank: fragment("ts_rank_cd(search_vector, ?, 32) AS rank", sq.search_query),
    introduction: fragment("ts_headline('english', CONCAT(introduction, ' ', main_body), search_query, 'StartSel=<mark>,StopSel=</mark>,MinWords=50,MaxWords=100') AS headline"),
    tags: c.tags
  })
You can see we're including the rank in the SELECT and referencing it within the fragment in the ORDER BY to avoid duplication. The introduction is using the ts_headline function with some details about where we want to pull the headline from. The title and tags will already be clearly visible to an end user, so we're only pulling a potential text blurb from the combined introduction and main_body fields, using the search query we referenced earlier. The ts_headline function takes a number of options that we can pass in as well. By default, it will highlight matches in the excerpt with <b> tags, but we're going to be a little more semantically proper and use <mark> tags instead. We're also going to control the minimum and maximum number of words in the blurb to keep things looking right on the page.
StartSel=<mark>,StopSel=</mark>,MinWords=50,MaxWords=100
For a good example here, you can search for "pg" in the search box and you'll see the results come back with matches and highlighting around "pg", "postgres" and "postgresql".
We didn't forget about the benchmarks either. Now, these are extremely unscientific benchmarks of the old static site compared to the new dynamic site, but I wanted to at least have a baseline before I spend time performance tuning. As a realistic measure, I opted to hit the no such thing as "real programming" article that recently had some circulation. It involves a decently large HTML payload, a primary page query and four other queries to populate the sidebars and footer. Benchmarks were performed from my laptop, across the tubes, on a 60Mb connection.
My old company server was a 4 core, 4GB RAM legacy Slicehost server that was migrated to a Rackspace Cloud Server post-acquisition. Here are the summarized benchmarks.
Document Length: 30196 bytes

$ ab -c 100 -n 1000 http://www.brightball.com/development/no-such-thing-as-real-programming
Time taken for tests:   4.427 seconds
Complete requests:      1000
Requests per second:    225.88 [#/sec] (mean)
Time per request:       4.427 [ms] (mean, across all concurrent requests)

$ ab -c 1000 -n 1000 http://www.brightball.com/development/no-such-thing-as-real-programming
Time taken for tests:   33.381 seconds
Complete requests:      1000
Requests per second:    29.96 [#/sec] (mean)
Time per request:       33.381 [ms] (mean, across all concurrent requests)
I'm told I need to increase the nginx worker connections setting on the server from 1024 to something higher. However, since the site lived with those settings in production for the last decade or so, I'm not going to adjust them; this reflects the true baseline.
Since I don't know if or when I'll get around to configuring a new server for this site, we're currently running on a Heroku Hobby Dyno (512MB RAM, 1 core) with a Heroku PostgreSQL free dev (shared) database plan. The HTML is slightly larger as well.
Document Length: 34241 bytes

$ ab -c 100 -n 1000 http://www.brightball.com/articles/no-such-thing-as-real-programming
Time taken for tests:   6.754 seconds
Complete requests:      1000
Requests per second:    148.05 [#/sec] (mean)
Time per request:       6.754 [ms] (mean, across all concurrent requests)

$ ab -c 1000 -n 1000 http://www.brightball.com/articles/no-such-thing-as-real-programming
Time taken for tests:   10.909 seconds
Complete requests:      1000
Requests per second:    91.67 [#/sec] (mean)
Time per request:       10.909 [ms] (mean, across all concurrent requests)
You are reading that correctly. Under the more manageable load of 100 concurrent requests, static nginx outperforms dynamic Elixir by about 2.3 [ms]. Under the higher load of 1000 concurrent requests, Elixir performs 3 times faster, and we haven't even done any performance tuning yet.
I'm very happy with everything I'm seeing from my new site and looking forward to diving further in as I learn more about what these tools can do. This article had something of a broad scope, so future articles will make more of an effort to focus. Hope y'all enjoyed it!