Scanning files with ClamAV inside a dockerized node application

This is a setup guide for a simple upload server with a ClamAV virus-scanning step.

The sample application is written in TypeScript, but the same setup works for any Node.js application.

Dockerfile

This Dockerfile is based on the node image and installs the ClamAV virus scanner and supervisor. If your application is written in Java or any other language, you can replace the FROM line.

FROM node:8
MAINTAINER Yeung Yiu Hung <hkclex@gmail.com>

# Debian base to use
ENV DEBIAN_VERSION jessie

# Install ClamAV and supervisor
RUN echo "deb http://http.debian.net/debian/ $DEBIAN_VERSION main contrib non-free" > /etc/apt/sources.list && \
    echo "deb http://http.debian.net/debian/ $DEBIAN_VERSION-updates main contrib non-free" >> /etc/apt/sources.list && \
    echo "deb http://security.debian.org/ $DEBIAN_VERSION/updates main contrib non-free" >> /etc/apt/sources.list && \
    apt-get update && \
    DEBIAN_FRONTEND=noninteractive apt-get install --no-install-recommends -y -qq \
        clamav-daemon \
        clamav-freshclam \
        libclamunrar7 \
        supervisor \
        wget && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*

# Initial update of av databases
RUN wget -O /var/lib/clamav/main.cvd http://database.clamav.net/main.cvd && \
    wget -O /var/lib/clamav/daily.cvd http://database.clamav.net/daily.cvd && \
    wget -O /var/lib/clamav/bytecode.cvd http://database.clamav.net/bytecode.cvd && \
    chown clamav:clamav /var/lib/clamav/*.cvd

# Update permissions
RUN mkdir /var/run/clamav && \
    chown clamav:clamav /var/run/clamav && \
    chmod 750 /var/run/clamav

# Configuration update: run clamd and freshclam in the foreground,
# and make clamd listen on TCP port 3310
RUN sed -i 's/^Foreground .*$/Foreground true/g' /etc/clamav/clamd.conf && \
    echo "TCPSocket 3310" >> /etc/clamav/clamd.conf && \
    sed -i 's/^Foreground .*$/Foreground true/g' /etc/clamav/freshclam.conf

# Volume provision
VOLUME ["/var/lib/clamav"]

WORKDIR /server

COPY . /server
RUN npm install && npm run postinstall

# Copy supervisor config
COPY ./configs/supervisord.conf /etc/supervisor/conf.d/supervisord-nodejs.conf

EXPOSE 3000
CMD ["/usr/bin/supervisord", "-n"]

supervisord.conf

supervisor is a service that lets you run multiple processes at once. In our case we need freshclam to keep the virus database updated, clamd as the virus scanner, and node for the server.

[supervisord]
nodaemon=true


[program:clamd]
directory=/
command=clamd
autostart=true
autorestart=true
stderr_logfile=/var/log/supervisor/%(program_name)s.log
stdout_logfile=/dev/fd/1
stdout_logfile_maxbytes=0
redirect_stderr=true

[program:freshclam]
directory=/
command=freshclam -d
autostart=true
autorestart=true
stderr_logfile=/var/log/supervisor/%(program_name)s.log
stdout_logfile=/dev/fd/1
stdout_logfile_maxbytes=0
redirect_stderr=true

[program:fileservice]
directory=/server
command=npm run debug
autostart=true
autorestart=true
stderr_logfile=/var/log/supervisor/%(program_name)s.log
stdout_logfile=/dev/fd/1
stdout_logfile_maxbytes=0
redirect_stderr=true

Server Code

Here is a simple upload server in TypeScript.

You will need:

  • multer
  • clamav.js
  • express
npm i multer clamav.js express --save
import express from "express";

import multer = require("multer");
import * as stream from "stream";

import clamav from "clamav.js";

const upload = multer({
  storage: multer.memoryStorage()
});

export class Application {
  private app = express();

  version(): string {
    return "1.0";
  }

  start() {
    console.log("Application Started");
    this.app.post("/photos/upload", upload.single("photo"), function(
      req,
      res,
      next
    ) {
      // Wrap the in-memory upload buffer in a readable stream for clamav.js
      const readStream = new stream.Readable();
      readStream.push(req.file.buffer);
      readStream.push(null);
      clamav
        .createScanner(3310, "127.0.0.1")
        .scan(readStream, function(err, object, malicious) {
          if (err) {
            console.log(object.path + ": " + err);
            next(err);
          } else if (malicious) {
            console.log(object.path + ": " + malicious + " FOUND");
            next(new Error("Virus Detected"));
          } else {
            console.log(object.path + ": OK");
            res.send("OK");
          }
        });
    });

    // Error-handling middleware: report scan failures and detected viruses
    this.app.use(function(err, req, res, next) {
      if (err !== null) {
        console.log(err);
        res.send({ result: "fail", error: err.message });
      } else {
        next();
      }
    });

    this.app.listen(3000, function() {
      console.log("Server listening on port 3000");
    });
  }

  stop(): boolean {
    return true;
  }
}

const app = new Application();
app.start();
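The handler turns the in-memory multer buffer into a stream before handing it to clamav.js. That conversion can be isolated into a small helper — a sketch, where `bufferToStream` is a name I am introducing, not part of the original code:

```typescript
import { Readable } from "stream";

// Sketch of the buffer-to-stream conversion the upload handler performs:
// wrap the in-memory multer buffer in a Readable so clamav.js can scan it
// like a file stream. `bufferToStream` is a hypothetical helper name.
export function bufferToStream(buffer: Buffer): Readable {
  const readable = new Readable({ read() {} }); // no-op read; data is pushed manually
  readable.push(buffer); // enqueue the whole buffer as a single chunk
  readable.push(null);   // signal end-of-stream
  return readable;
}
```

The handler could then pass `bufferToStream(req.file.buffer)` straight to the scanner instead of building the stream inline.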

Testing it

You can upload a file to /photos/upload with the following content (the standard, harmless EICAR test string) to test the virus scanner

X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*

You can find the complete project here: https://github.com/darkcl/simple-upload


Testing API against your API documentation within docker

This article will show you how to set up a testing environment inside a docker network. We will be using dredd as our testing tool.

Your API documentation

Here is a sample API Blueprint document

FORMAT: 1A

# Tsum Tsum Status

## Group Health Check

### Ping [GET /ping]

+ Response 200 (application/json; charset=utf-8)

    + Body

            {"message": "OK"}

Simple Response with express

We will create a simple express application

import express from "express";

const app = express();

app.get("/ping", function(req, res) {
  res.json({ message: "OK" });
});

app.listen(3000);

Setup dredd.yml

dry-run: null
hookfiles: null
language: nodejs
sandbox: false
server: null
server-wait: 3
init: false
custom: {}
names: false
only: []
reporter: [html]
output: [./report/index.html]
header: []
sorted: false
user: null
inline-errors: false
details: false
method: []
color: true
level: info
timestamp: false
silent: false
path: []
hooks-worker-timeout: 5000
hooks-worker-connect-timeout: 1500
hooks-worker-connect-retry: 500
hooks-worker-after-connect-wait: 100
hooks-worker-term-timeout: 5000
hooks-worker-term-retry: 500
hooks-worker-handler-host: 127.0.0.1
hooks-worker-handler-port: 61321
config: ./dredd.yml
blueprint: ./docs/spec.apib
endpoint: 'http://api:3000'

Note that our endpoint is http://api:3000, since we are testing inside a docker network where the api service name resolves to its container.

Setup docker-compose.itest.yml

version: '3'
services:
  api:
    build: .
    command: npm run debug
    volumes:
      - './dist:/server/dist'
  itest:
    image: apiaryio/dredd
    volumes:
      - './docs:/docs'
      - './dredd.yml:/dredd.yml'
      - './report:/report'
    depends_on:
      - api

Setup Makefile

.PHONY: test

test:
	@echo Start testing
	@docker-compose -f ./docker-compose.itest.yml build
	@docker-compose -f ./docker-compose.itest.yml up -d api
	@docker-compose -f ./docker-compose.itest.yml up --exit-code-from itest itest
	@docker-compose -f ./docker-compose.itest.yml down

Run Test

make test

Conclusion

We can expand this setup to add mongodb, mysql or any other database.

Since we are testing in docker, each test run starts from a clean environment.

Furthermore, it ensures your API document stays detailed and accurate enough for dredd to run tests against.


VSCode as Typescript IDE

This is a setup guide for using VSCode as an IDE for Typescript development.

This article covers which plugins to use for code coverage reporting, automatic code formatting, and hot reload for TypeScript unit tests and applications.

VSCode Plugins

You will need these plugins for code coverage report and auto code formatting.

  • Coverage Gutter
  • Prettier

Nodejs

In the project folder, run these commands


npm init

mkdir app # Application folder, store all typescripts

mkdir test # Test folder, store all test scripts

gi node > .gitignore # gitignore from https://www.gitignore.io/

npm i typescript # Typescript compiler

npm i mocha nyc @types/mocha @types/nyc ts-node nodemon -D # Unit Tests and hot reload

package.json

Edit scripts Section

"scripts": {
  "postinstall": "tsc -p .",
  "watch": "tsc -w -p .",
  "debug": "nodemon --watch ./dist --inspect=0.0.0.0:5858 --nolazy ./dist/index.js",
  "docker-debug": "docker-compose up",
  "test": "nyc mocha",
  "watch-test": "nodemon -e ts --watch package.json ./app ./test --exec 'npm test'",
  "start": "node ./dist/index.js"
}

Add nyc section

"nyc": {
  "include": ["./app/**.*"],
  "extension": [".ts", ".tsx"],
  "require": ["ts-node/register"],
  "reporter": ["text-summary", "html", "lcov"],
  "sourceMap": true,
  "instrument": true,
  "all": true
}

mocha.opts

Add mocha.opts in ./test

--require ts-node/register
--require source-map-support/register
--recursive
test/**/*.tests.ts

Docker Settings

Dockerfile

FROM node:8-slim

WORKDIR /server

COPY . /server
RUN npm install

EXPOSE 3000
CMD [ "npm", "start" ]

docker-compose.yml

version: '2'
services:
  web:
    build: .
    command: npm run debug
    volumes:
      - './dist:/server/dist'
    ports:
      - '3000:3000'
      - '5858:5858'

Typescript Settings

settings.json

{
  "editor.formatOnSave": true
}

launch.json

{
  "version": "0.2.0",
  "configurations": [
    {
      "type": "node",
      "request": "launch",
      "name": "Launch in Docker",
      "preLaunchTask": "tsc-watch",
      "protocol": "auto",
      "runtimeExecutable": "npm",
      "runtimeArgs": ["run", "docker-debug"],
      "port": 5858,
      "restart": true,
      "timeout": 60000,
      "localRoot": "${workspaceFolder}",
      "remoteRoot": "/server",
      "outFiles": ["${workspaceFolder}/dist/**/*.js"],
      "console": "integratedTerminal",
      "internalConsoleOptions": "neverOpen"
    }
  ]
}

tasks.json

{
  "version": "0.1.0",
  "tasks": [
    {
      "taskName": "tsc-watch",
      "command": "npm",
      "args": ["run", "watch"],
      "isShellCommand": true,
      "isBackground": true,
      "isBuildCommand": true,
      "problemMatcher": "$tsc-watch",
      "showOutput": "always"
    }
  ]
}

Running the project

In VSCode press F5 (fn + F5 on a Mac); every time you change files in ./app, the application will reload.

Unit Tests

npm run watch-test

Code Coverage

Start watching the lcov report with the Coverage Gutters shortcut

Shift + CMD + 8

Code Coverage report with travis-ci, coveralls and fastlane

Recently I created a new Swift framework called Jellyfish, and I wished to include unit tests and attach badges to the GitHub repo (it is all about badges!). Here is how I set up the project.

fastlane Settings

In the project folder, create a Gemfile

# frozen_string_literal: true

source "https://rubygems.org"

git_source(:github) {|repo_name| "https://github.com/#{repo_name}" }

# gem "rails"
gem 'fastlane'
gem 'xcov'
gem 'cocoapods'

Run the following commands

bundle install --path vendor/
bundle exec fastlane init

# Select manual
# and press enter for all questions

Edit fastlane/Fastfile

# This file contains the fastlane.tools configuration
# You can find the documentation at https://docs.fastlane.tools
#
# For a list of all available actions, check out
#
#     https://docs.fastlane.tools/actions
#

# Uncomment the line if you want fastlane to automatically update itself
# update_fastlane

default_platform(:ios)

platform :ios do
  desc "Tests for iOS"
  lane :unit_test do
    scan(scheme: "Jellyfish-iOS") # Change to your scheme
    xcov(
      scheme: "Jellyfish-iOS", # Change to your scheme
      output_directory: "xcov_output",
      ignore_file_path: ".xcovignore"
    )
  end
end

platform :macos do
  desc "Tests for macOS"
  lane :unit_test do
    scan(scheme: "Jellyfish-macOS") # Change to your scheme
    xcov(
      scheme: "Jellyfish-macOS", # Change to your scheme
      output_directory: "xcov_output",
      ignore_file_path: ".xcovignore"
    )
  end
end

You can try the tests on your local machine with

bundle exec fastlane ios unit_test

Travis-CI Settings

Go to Travis-CI and enable ci for your repo

In project folder, add .travis.yml

language: swift
osx_image: xcode9.1

cache:
- bundler

before_install:
- bundle install
- bundle update

script:
- bundle exec fastlane ios unit_test
- bundle exec fastlane macos unit_test

Push the code, and see it run on travis-ci!

Coveralls Settings

Go to Coveralls and enable code coverage for your repo.

Copy the service_token shown on the page you are redirected to.

Go back to Travis-CI and in your project, select More Options -> Settings

Add an environment variable COVERALLS_REPO_TOKEN whose value is the token you copied from Coveralls.

Re-run the pipeline and wait for Coveralls to update the code coverage percentage (it can take up to 8 hours to update).


Using C++ with Swift

Recently I have been writing an API Blueprint stubbing library in Swift, and the first step is to parse the .apib file.

snowcrash is a C++ library that parses .apib files, and I wanted to include it as a library.

Here is how I set up Swift to call a C++ static library.

Xcode Settings


I assume you have compiled snowcrash into an .a static library. You can find build instructions on my fork of snowcrash.

Other Linker Flags

In Build Settings -> Other Linker Flags, add -lsnowcrash.

Static C++ libraries have a lib prefix, which is dropped in the flag.

For example, if you want to use another static library called libexample.a, you can add another linker flag: -lexample.

You may also want to include all the .h files from the library's project folder.

Bridging Header

In your project you also need a bridging header. You can add an Objective-C class to trigger the prompt that generates one, or set it manually in Build Settings -> Objective-C Bridging Header.

C++


Add a wrapper.cpp in your project

#include <iostream>
#include "snowcrash.h"

extern "C" void snowcrashTest() {
    mdp::ByteBuffer blueprint = R"(
# My API
## GET /message
+ Response 200 (text/plain)

        Hello World!
)";

    snowcrash::ParseResult<snowcrash::Blueprint> ast;
    snowcrash::parse(blueprint, 0, ast);

    std::cout << "API Name: " << ast.node.name << std::endl;
}

In your bridging header

void snowcrashTest();

Swift


After you add wrapper.cpp and declare the function in the bridging header, you can call it directly from Swift

import Foundation

public struct Jellyfish {
    public static func printSnowcrash() {
        snowcrashTest()
    }
}

Writing, Mocking and Testing API document with API Blueprint, aglio, drakov and dredd

Sick of writing and reading API documents in messy Google Drive docs? You should learn how to write API documents with API Blueprint!

Writing Documentation


Before We Start

This blog post only includes the basic syntax needed to write .apib files that you can test and start a mock server with. Learn more advanced syntax at API Blueprint.

All the source mentioned below can be found here: https://github.com/darkcl/api-blueprint-stack

Folder structure

Setup the api folder structure like this:

./
├── docs
│   ├── request # Request Body JSON
│   ├── response # Response Body JSON
│   ├── schema # Schema JSON
│   └── blog.apib # API Blue Print Document
└── server # Server Code

I use VSCode as my editor to create and edit apib files. VSCode has an awesome plugin (API Elements) that helps you format documents and jump between endpoints easily.

API Blueprint Syntax with include

Let's create an api that lists all posts on a simple blog.

Create docs/blog.apib with the following content

FORMAT: 1A

# Blog

Blog is a simple API to manage a blog

# Group Post

## Post Collection [/post]

### List All Post [GET]

+ Response 200 (application/json; charset=utf-8)

    + Body

            <!-- include(response/post/get-200-list.json) -->

    + Schema

            <!-- include(schema/posts.json) -->

The <!-- include( ... ) --> syntax can be used when you want to pull in content from a different file.
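Conceptually, the compile step just inlines those files. Here is a minimal sketch of the idea — my simplification, not aglio's actual implementation, with `files` standing in for reads from the docs/ folder:

```typescript
// Sketch: expand `<!-- include(path) -->` tags by splicing in the referenced
// file's contents. `files` is an in-memory stand-in for the docs/ directory.
export function expandIncludes(
  doc: string,
  files: Record<string, string>
): string {
  return doc.replace(
    /<!--\s*include\(([^)]+)\)\s*-->/g,
    (match, path: string) => files[path.trim()] ?? match // leave unknown paths untouched
  );
}
```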

Reading the documents in browser

In this section, you will need aglio

npm i -g aglio

After installing aglio, you can start a local server that serves your api document

aglio -i ./docs/blog.apib -t slate -s

You can export a static html too

aglio -i ./docs/blog.apib -t slate -o ./docs/index.html

Start a mock server with your .apib file


Compile apib without include tags

After you finish the documentation, you can start a mock server with drakov.

But drakov cannot read include tags, so you need to compile them away first:
aglio -i ./docs/blog.apib --compile -o out.apib

Start Server

In this section, you need drakov

npm i -g drakov

Create a config.js

module.exports = {
  sourceFiles: 'out.apib',
  debugMode: false,
  discover: true,
  watch: true,
  ignoreHeader: ['Accept', 'Content-Type']
};

Run the server

drakov --config ./config.js

You can try it out with curl

> curl http://localhost:3000/post
[
  {
    "id": 1,
    "content": "Post #1"
  },
  {
    "id": 2,
    "content": "Post #2"
  }
]

Multiple Responses

In your documentation, you can define multiple responses for the same endpoint.

Add these lines to your docs/blog.apib

### Create Post [POST]

+ Request

    + Body

            <!-- include(request/post/post-create.json) -->

+ Response 200 (application/json; charset=utf-8)

    + Body

            <!-- include(response/post/post-200-single.json) -->

    + Schema

            <!-- include(schema/post.json) -->

+ Request

    Duplicated post id

    + Body

            <!-- include(request/post/post-create-duplicate.json) -->

+ Response 400

Define the request sample json in request/post/post-create.json

{
  "id": 3,
  "content": "Post #3"
}

Define the failure-case request sample json in request/post/post-create-duplicate.json

{
  "id": 2,
  "content": "Post #2"
}

Create sample response in response/post/post-200-single.json

{
  "id": 3,
  "content": "Post #3"
}

Compile the file and start the server again
aglio -i ./docs/blog.apib --compile -o out.apib
drakov --config ./config.js

You can try it out with curl

curl -X POST \
  http://localhost:3000/post \
  -H 'cache-control: no-cache' \
  -H 'content-type: application/json' \
  -d '{
    "content": "Post #2",
    "id": 2
  }'

It should respond with status code 400 and an empty response body.
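How does the mock server know to return the 400? My mental model — a simplification, not drakov's actual code — is that the documented request examples are matched against the incoming body, and the paired response is returned:

```typescript
// Simplified model of request/response pairing in a blueprint mock server:
// each documented request example carries the status of its paired response.
// Names and the matching strategy are illustrative, not drakov's internals.
interface DocumentedExample {
  request: unknown;   // parsed JSON of the documented request body
  status: number;     // status code of the paired response
}

export function pickStatus(
  examples: DocumentedExample[],
  body: unknown
): number {
  const hit = examples.find(
    e => JSON.stringify(e.request) === JSON.stringify(body) // naive structural match
  );
  return hit ? hit.status : 404; // no documented example matches
}
```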

Test your document with dredd


Starting a testing server with docker

In a real situation, you will start using dredd to test api endpoints once you have finished (or while you are implementing) your server.

For this blog post, we can keep using the drakov server to try out dredd's functions.

Install and configure dredd

In your api document root folder, run the following command

npm i -g dredd

Create dredd.yml

dry-run: null
hookfiles: null
language: nodejs
sandbox: false
server: null
server-wait: 3
init: false
custom: {}
names: false
only: []
reporter: []
output: []
header: []
sorted: false
user: null
inline-errors: false
details: false
method: []
color: true
level: info
timestamp: false
silent: false
path: []
hooks-worker-timeout: 5000
hooks-worker-connect-timeout: 1500
hooks-worker-connect-retry: 500
hooks-worker-after-connect-wait: 100
hooks-worker-term-timeout: 5000
hooks-worker-term-retry: 500
hooks-worker-handler-host: 127.0.0.1
hooks-worker-handler-port: 61321
config: ./dredd.yml
blueprint: out.apib
endpoint: 'http://localhost:3000'

Then start the test!

> dredd
info: Configuration './dredd.yml' found, ignoring other arguments.
info: Beginning Dredd testing...
pass: GET (200) /post duration: 32ms
pass: POST (200) /post duration: 10ms
pass: POST (400) /post duration: 6ms
complete: 3 passing, 0 failing, 0 errors, 0 skipped, 3 total
complete: Tests took 51ms

References & Source Location


You can clone the complete project here: https://github.com/darkcl/api-blueprint-stack


Chainable Methods and Builder Pattern in Swift

In one of my libraries (LayoutKit, not the LinkedIn one), you can lay out a view with this code:

let containerView: UIView = UIView()
let box: UIView = UIView()
containerView.addSubview(box)

box.build { layout in
    layout.centerX.equalTo().superView(.centerX)
    layout.centerY.equalTo().superView(.centerY)
    layout.width.equalTo().constant(30)
    layout.height.equalTo().constant(30)
}

Builder Pattern

As you can see, the build function is an extension on UIView in LayoutKit

import UIKit

public typealias LayoutContructor = ((LayoutBuilder) -> Void)

extension UIView {
    public func build(layout: LayoutContructor!) {
        let builder: LayoutBuilder = LayoutBuilder(view: self)
        layout(builder)
        builder.build()
    }
}

The build function passes a LayoutBuilder into the layout closure, which is exposed to the library user to lay out the view.

After that, the builder build()s all the NSLayoutConstraints into the view.

Chainable Methods

This is one of the functions that the builder can call

public func equalTo() -> Layout {
    relationship = .equal
    return self
}

The function performs some action and returns self, so further functions can be chained onto the result.

If the function's result is discardable, you can add @discardableResult before the function declaration.

@discardableResult
public func constant(_ layoutConstant: CGFloat) -> Layout {
    constant = layoutConstant
    return self
}

Setup RSS Reader with Stringer and dokku

Previously I created a blog with dokku, and I wanted to build an application that uses a database.

This is a guide on setting up your own Stringer rss reader with dokku.

Dokku Host configuration


Create App

Create the app the normal way

dokku apps:create rss

Stringer uses postgresql; install the plugin and create a database

sudo dokku plugin:install https://github.com/dokku/dokku-postgres.git
dokku postgres:create rss-database

You need to link the app and database together

dokku postgres:link rss-database rss

Setup swap

For dokku to build the Stringer image, you need more ram and swap space.

In my case, I created an 8G swapfile for it to build properly.

sudo fallocate -l 8G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab

Client-side configuration


Clone Repository

1
2
git clone git://github.com/swanson/stringer.git
cd stringer

Update Configuration files

I tried setting the PORT environment variable in dokku, but the deploy always failed with invalid argument: -p $PORT.

Edit Procfile and replace $PORT with 8080.

The content should be like this

web: bundle exec unicorn -p 8080 -c ./config/unicorn.rb
console: bundle exec racksh

Deploy to dokku


Push to dokku (Client-side)

git remote add dokku dokku@example.com:rss
git push dokku master

If the deploy fails, go to the remote host and run this command

dokku config:set rss CURL_TIMEOUT=600

After deploy (Host)

dokku config:set rss APP_URL="rss.example.com"
dokku config:set rss SECRET_TOKEN=`openssl rand -hex 20`
dokku run rss bundle exec rake db:migrate
dokku ps:restart rss

Let’s encrypt (Host)

dokku config:set --no-restart rss DOKKU_LETSENCRYPT_EMAIL=user@example.com
dokku letsencrypt rss
dokku letsencrypt:cron-job --add

Auto Refresh (Host)

Add this to your dokku host's crontab (crontab -e)

@hourly dokku --quiet --rm run rss bundle exec rake fetch_feeds

Fever API

Stringer includes a fever api; if your reader supports it (Reeder / Unread), you can sync with the following info.

In this example, I use https://rss.example.com as your rss application url.

Fields      Input
Server      https://rss.example.com/fever
Email       stringer
Password    yourpassword

Setup Hexo with Dokku on AWS EC2

Recently, I bought a .tech domain on Mashable.com ($39.99 for 10 years) and wanted to make something cool with it.

I am a big fan of Heroku and always wished I could self-host a heroku-like service.

Then I read about dokku, a very cool piece of software that replicates the heroku deploy process.

This is a guide on how to set up a blog with Hexo, dokku and AWS EC2.

AWS Configuration


In this section, I assume you have already created an account on AWS

One-Click Deploy with dokku-aws

Go to dokku-aws Github Repo and click on Launch Stack

Follow the instructions and it will create an Ubuntu EC2 instance with dokku installed and an Elastic IP.

You can find all information in your AWS Console.

Update DNS Record

Copy your Elastic IP address and add 2 A records in your domain's DNS settings. In this example, I use example.com.

Name             IP
example.com      Elastic IP
*.example.com    Elastic IP

SSH Settings

You will need ssh and git to access and deploy applications to the EC2 instance.

Add these lines on your machine in ~/.ssh/config.

In this example, I use example.com and the key location ~/.ssh/example-com.pem

Host example.com
    Hostname example.com
    IdentityFile ~/.ssh/example-com.pem

Enable SWAP (optional)

The free tier EC2 instance has only 1 GB of ram; you may need more for dokku to build bigger docker images.

Create a 1GB swapfile with this command. You can change 1G to any size you want.

sudo fallocate -l 1G /swapfile

Verify the swap file size

ls -lh /swapfile

Change permission of the swapfile to 600

sudo chmod 600 /swapfile

Enable the swapfile

sudo mkswap /swapfile
sudo swapon /swapfile

Make it persistent

echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab

Dokku Configuration


Create App (your EC2 instance)

dokku apps:create blog

Enable SSL with Let’s Encrypt (your EC2 instance)

Install plug-in

sudo dokku plugin:install https://github.com/dokku/dokku-letsencrypt.git

Enable plug-in on your apps

In this example, I use `you@example.com` as your email.

dokku config:set --no-restart blog DOKKU_LETSENCRYPT_EMAIL=you@example.com
dokku letsencrypt blog

# Auto-renew
dokku letsencrypt:cron-job --add

Hexo Configuration


Install Hexo

npm install hexo-cli -g

You can find more about Hexo here

Generate Site

After you set up your theme and write your first post, generate the site with this command

hexo generate

Deploy to dokku

After you generate the site, the root folder should contain public/.

Create a new folder outside the hexo project folder and copy the contents of public/ into it.

Create Dockerfile in this folder

FROM nginx:alpine
COPY . /usr/share/nginx/html
COPY nginx.vh.default.conf /etc/nginx/conf.d/default.conf

Create nginx.vh.default.conf in this folder

server {
    listen 5000;
    server_name localhost;

    port_in_redirect off;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }

    access_log /var/log/nginx/access.log combined;
}

Then create a git repo in this folder

git init
git add .
git commit -m "My first blog post"

Then add a remote pointing to your AWS EC2 instance

In this example, I use example.com as your domain.

git remote add dokku dokku@example.com:blog

Then push

git push --set-upstream dokku master