A while ago I decided to make public a Telegram bot to monitor bus times in Dublin (@dublin_bus_bot). Before the release I became curious to see how many people would use it (spoiler: just a handful), and I thought it would be a good idea to track its usage on Google Analytics.
Google Analytics provides a Measurement Protocol that can be used to track things other than websites (mobile apps, IoT devices). At the moment no Elixir client exists for this protocol (and it would not be anything more than an API wrapper). My plan is to call the Google Analytics TK endpoint with HTTPoison, but I'd prefer not to have to call the tracking function in every single bot command.
One of the features of Elixir I like most is macros, which allow you to generate code at compile time. I decided to define a macro that looks like a function definition but defines a function with the same body plus an additional call to the tracking function. I chose this approach because it seems more idiomatic than the decorator syntax typical of other languages.
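A minimal sketch of what such a macro could look like (names are illustrative, not meter's actual API; the caller is assumed to define a track/1 function):

```elixir
defmodule Tracked do
  # Hypothetical `deftrack`: behaves like `def`, but the generated
  # function first reports the command name via `track/1`, then
  # executes the original body.
  defmacro deftrack(head, do: body) do
    # For a simple head like `start(chat_id)`, the AST is
    # `{:start, meta, [args]}` — we only need the name.
    {name, _meta, _args} = head

    quote do
      def unquote(head) do
        track(unquote(name))
        unquote(body)
      end
    end
  end
end
```

Using it would then read like a normal function definition: `deftrack start(chat_id), do: send_help(chat_id)` defines `start/1` that calls `track(:start)` before running its body.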
I implemented this approach in meter, which I use in the Telegram bot I wrote.
Elixir macros are a powerful tool to abstract away functionality or to write DSLs. They require a bit of time to wrap your head around, in particular the context switch, but they are totally worth the hassle if they reduce the clutter in your code base.
I played for some time with the idea of having a Telegram bot run serverless in the cloud. Obviously the code runs on some server, but you don't need to care about provisioning, deploying, starting the application, etc. All you care about is your code.
Google Cloud Functions can be triggered by Pub/Sub events, bucket events, and HTTP invocations. The latter is the one we are going to provide as a webhook to Telegram, to be invoked whenever a message is sent to our bot.
Cloud Functions also remove some friction from our code: when the request is sent with the appropriate application/json header, the parsed JSON is available on the request, and when we send back an object it is automatically serialized and returned to the client.
The example code of the project can be found at https://github.com/carlo-colombo/serverless-telegram-bot-gc-functions
- A Google Cloud account and a project: https://cloud.google.com/resource-manager/docs/creating-managing-projects
- Google Cloud Functions and the RuntimeConfig API enabled from the API Manager.
- A Telegram bot token; ask the BotFather for one.
- Note that both Google Cloud Functions and RuntimeConfig are still in beta.
- Even if the GCP free tier is quite generous, some costs may be billed.
Just an easy bot that echoes the received message.
This function returns a (promise of a) token, either from the RuntimeConfig API when running online or from an environment variable when running locally. The value is retrieved using fredriks/cloud-functions-runtime-config, which wraps the API.
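A sketch of that function; the config and variable names ('bot-config', 'telegram-token') and the TELEGRAM_TOKEN variable are examples:

```javascript
// Resolve the bot token: RuntimeConfig online, env var locally.
function getToken() {
  if (process.env.NODE_ENV === 'production') {
    // Only loaded online, where the dependency is deployed.
    const runtimeConfig = require('cloud-functions-runtime-config');
    // Returns a promise of the value stored in the runtime config.
    return runtimeConfig.getVariable('bot-config', 'telegram-token');
  }
  // Locally: wrap the env var so callers always get a promise.
  return Promise.resolve(process.env.TELEGRAM_TOKEN);
}
```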
NODE_ENV is set to production when the function runs online, allowing us to discriminate which environment the function is running in.
Google provides a local emulator for Cloud Functions. It allows you to deploy a function locally and iterate on it without having to deploy to Google's servers. It reloads the code when it changes on the file system, so it is not necessary to redeploy after the first time.
Before deploying the function it is necessary to create a Cloud Storage bucket where the function code will be staged. Deploying the function with the HTTP trigger returns a URL to trigger the function, which looks like https://<GCP_REGION>-<PROJECT_ID>.cloudfunctions.net/function_name. Use this URL to set up a webhook for your bot on Telegram. You can find more information on webhooks in the Telegram API documentation.
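The whole flow, sketched with the beta-era CLI (bucket and function names are examples; check the current gcloud syntax before running):

```shell
# Create the staging bucket for the function code (name is an example)
gsutil mb gs://my-bot-staging

# Deploy the function with an HTTP trigger; prints the trigger URL
gcloud beta functions deploy echoBot \
  --stage-bucket my-bot-staging \
  --trigger-http

# Point the Telegram webhook at the returned URL
curl -F "url=https://<GCP_REGION>-<PROJECT_ID>.cloudfunctions.net/echoBot" \
  "https://api.telegram.org/bot<TOKEN>/setWebhook"
```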
Setting up a Telegram bot using Google Cloud Functions is quick and easy, and with the HTTP trigger it is possible to seamlessly set a webhook endpoint for a bot without having to care about a server or HTTPS certificates (HTTP triggers are served over HTTPS).
One last thing to keep in mind is that functions are stateless and need to be connected to other services to, for example, store data or be scheduled.
Docker Cloud (formerly Tutum) helps to deploy container images on node clusters. Nodes can be provisioned directly from the service (on Digital Ocean, Azure, AWS, Packet.net, and IBM SoftLayer). Additionally, it is possible to use the Bring Your Own Node (BYON) feature to add any Linux server connected to the internet as a node and deploy on it.
I’m using this service to manage a stack (a set of images described by a file similar to docker-compose.yml) composed of a static website served by nginx, two API servers built with Elixir, nginx-proxy (for reverse proxying), and jrcs/letsencrypt-nginx-proxy-companion (automatic creation/renewal of Let’s Encrypt certificates). Docker Cloud provides an interface to start/stop containers and scale the same image to multiple nodes.
BYON requires some manual intervention: installing an agent and usually opening a port (2375) in the server firewall to let Docker Cloud communicate with the agent; additional ports are required to enable network overlay.
Scaleway is a cloud provider, still in beta, that offers its smallest server (VC1S: 2 x86 64-bit cores, 2 GB memory, 50 GB SSD disk, 200 Mbit/s bandwidth) for 2.99 €/month. You can request an invite to the beta at https://www.scaleway.com/invite/
To open the ports required by the agent to communicate with Docker Cloud, go to Security, pick one of the security groups, and open the necessary ports as shown below. A security group is a set of firewall rules that can be applied to multiple servers.
I set up an Ubuntu (14.04) image on the server and ran the command shown in the BYON pop-up; the agent downloads and installs Docker and a few service images. After the installation completes it should connect to the Docker Cloud server and update the pop-up with a success message. Since it was taking some time to connect, I checked the agent log at /var/log/dockercloud/agent.log and saw the following error.
To solve this issue it is necessary to create some loopback devices; once that is done, the agent starts Docker and notifies Docker Cloud that it is ready. From that point it is possible to start containers on the newly provisioned node.
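A hedged sketch of how the missing loop devices can be created (requires root; 7 is the standard loop device major number, and the count of eight devices is an assumption):

```shell
# Create /dev/loop0../dev/loop7 if they are missing
for i in $(seq 0 7); do
  [ -b "/dev/loop$i" ] || mknod -m 0660 "/dev/loop$i" b 7 "$i"
done
```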
I’m building a bot for Telegram, and after making a release with exrm I ran into some problems configuring the Telegram API key using environment variables. I decided to share what I found because my Google-fu was not helpful at all.
config.exs is where the configuration of an Elixir project is added. The file is interpreted when the project is run with iex -S mix or a mix task. The values are retrieved during the lifetime of the application with the functions (get_env/3, fetch_env/2, fetch_env!/2) available in the Application module. To include values from the environment where the project is running, System.get_env/1 is used.
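A sketch of the two sides, assuming a hypothetical :my_bot application:

```elixir
# config/config.exs — evaluated when the project is interpreted
use Mix.Config

config :my_bot,
  # read from the surrounding environment
  telegram_token: System.get_env("TELEGRAM_TOKEN")
```

Elsewhere in the application the value is read back with `Application.get_env(:my_bot, :telegram_token)`.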
Surprisingly (if, like me, you don’t have an Erlang background), when you make a release of your project with exrm the config.exs file is executed at build time and the environment variables are crystallized in the build output.
The output from exrm contains a file named sys.config, which is the result of executing the config.exs file and is written as Erlang terms. Once released, editing this file is the only way to dynamically configure the built application.
conform is a library from the same author as exrm, made to ease the configuration of Elixir applications. The library validates a properties-like file (configuration/your_app.conf) against a configuration schema (configuration/your_app.schema.exs) and generates the sys.config file. The schema file contains descriptions, defaults, and types for the parameters. A properties file is a lot easier and more common to configure than a file containing Erlang terms; additionally, conform adds flexibility and more control over configuration.
A couple of mix tasks are made available to help transition to a conform-based configuration. sys.config is generated at the start of the application and, using the plugin exrm_conform, at the start of a packaged application. This behaviour allows loading configuration parameters from environment variables defined when the application is started.
After running mix conform.new you will find a yourapp.schema.exs in your conf folder. This file has 3 main sections: mappings, transforms, and validators. The mappings section is where the parameters are defined and where you can set defaults, descriptions, and types. The transforms section allows you to add transformation functions to change or derive a configured parameter. Finally, the validators section allows you to reject invalid configurations and stop the application.
At first I tried to read the environment variable in the defaults of the parameters, but this leads to an uncommon situation where a static parameter (the one in yourapp.conf) overrides a parameter derived from an environment variable.
Eventually I found that adding a function to transforms is probably a better way to do it.
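A sketch of such a transform, assuming the hypothetical my_bot.telegram_token key and a TELEGRAM_TOKEN variable (Conform.Conf.get/2 is conform's API for reading an already-parsed value):

```elixir
# In the transforms section of conf/my_bot.schema.exs
transforms: [
  "my_bot.telegram_token": fn conf ->
    case System.get_env("TELEGRAM_TOKEN") do
      nil ->
        # No env var: fall back to the value from the .conf file
        case Conform.Conf.get(conf, "my_bot.telegram_token") do
          [{_key, value}] -> value
          _ -> nil
        end

      token ->
        # The environment variable wins when it is set
        token
    end
  end
]
```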
In this case we load the configured value with the conform API and return it only if the environment variable is empty. Generating the parameter this way prevents having a default in the mappings section, but a workaround is to chain || to add a default value. I think this approach is not bad for API keys and similar values that you don’t want to check in with your code (even if they are keys for development environments).