Posted by Gizra on Fri, 02/16/2018 - 08:00

Chances are you already use Travis or another CI to execute your tests, and everyone politely waits for the CI checks before even thinking about merging, right? More likely, waiting your turn becomes a pain and you click merge anyway: it’s a trivial change and you need it now. If this happens often, it’s the responsibility of whoever maintains the scripts that Travis crunches to make some changes. There are some trivial, and some not so trivial, ways to keep the team willing to wait for completion.

This blog post is for you if you have a project with Travis integration and you’d like to maintain and optimize it, or if you are just curious what’s possible. Users of other CI tools, keep reading; many of these areas may apply to your setup too.

Unlike other performance optimization areas, doing before-and-after benchmarks is not so crucial here, as Travis already collects most of the data; you just have to do the math and present the numbers proudly.


To start, if your .travis.yml lacks the cache: directive, then the easiest place to begin is caching dependencies. For a Drupal-based project, it’s a good idea to cache all the modules and libraries that must be downloaded to build the project (it uses a build system, doesn’t it?). Even a minimal variant such as:

- $HOME/.composer/cache/files

or for Drush

- $HOME/.drush/cache

It’s explained well in the verbose documentation: before your script is executed, Travis populates the cache directories automatically from a previous successful build. If your project has only a few packages, it won’t help much, and can actually make things slower. What’s critical is to cache materials that are slow to generate or fetch but quick to restore. Caching one large ZIP file would not make sense, for example, since re-downloading it is cheap; caching many small files fetched from multiple origin servers is far more beneficial.
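Putting the two directories above together, a minimal cache stanza for a Drupal project might look like the sketch below; the exact paths depend on your build system:

```yaml
# Sketch of a .travis.yml cache section; adjust directories to your project.
cache:
  directories:
    # Composer's package cache: many small files from many origin servers.
    - $HOME/.composer/cache/files
    # Drush's download cache for contrib modules and themes.
    - $HOME/.drush/cache
```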

From this point, you could just read the standard documentation instead of this blog post, but we also have some icing on the cake for you. A Drupal installation can take several minutes: initializing all the modules, executing the logic of the install profile, and so on. Travis is kind enough to provide a bird’s-eye view of what eats up build time:

Execution speed measurements built in the log

Mind the bottleneck when making a decision on what to cache and how.

For us, that means caching the installed, initialized Drupal database and the full document root. Cache invalidation is hard, and we can’t change that, but this turned out to be a good compromise between complexity and execution speed gain; check our examples.

Do your homework and cache whatever is most resource-consuming to generate: the SQL database, the built source code, or a compiled binary. Travis is here to assist with that.
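As a sketch, the cache-the-expensive-artifact idea can be expressed as a small install step. The paths, file names, and the commented-out Drush commands below are illustrative assumptions, not taken from our actual scripts:

```shell
#!/bin/sh
# Hypothetical cache-aware install step: import the database dump when the
# Travis cache already holds one, otherwise do the slow full site install
# and prime the cache for the next build.
CACHE_DIR="${CACHE_DIR:-$HOME/.db-cache}"
DUMP="$CACHE_DIR/drupal.sql"
mkdir -p "$CACHE_DIR"

if [ -f "$DUMP" ]; then
  echo "cache hit: importing database dump"
  # drush sql-cli < "$DUMP"
else
  echo "cache miss: running the full site install"
  # drush site-install standard --yes
  # drush sql-dump --result-file="$DUMP"
  touch "$DUMP"   # stand-in for the real dump in this sketch
fi
```

On the first (cache-miss) build this takes the slow path and leaves a dump behind; every later build restores it instead of reinstalling.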

Software Versions

There are two reasons to pay attention to software versions.

Use Pre-installed Versions
Travis uses container images based on different distributions. Let’s say you use trusty, the default one these days: if you choose PHP 7.0.7, it’s pre-installed, while 7.1 needs to be fetched separately, and that takes time on every single build. When you have production constraints, matching the production version is almost certainly more important, but in some cases using the pre-installed version can speed things up.

Moreover, let’s say you prefer MariaDB over MySQL: do not sudo and install it with the package manager, as there is an add-on system to make it available. The same goes for Google Chrome, and so on.
Stick to what’s already inside the image if you can, and exploit what Travis can fetch for you via the YML definition!
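For instance, MariaDB and Chrome can both be requested declaratively through the addons: section; the versions here are illustrative:

```yaml
addons:
  mariadb: '10.2'   # replaces MySQL without any manual apt-get calls
  chrome: stable    # pre-fetched Google Chrome for browser tests
```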

Use the Latest and (or) Greatest

If you have ever read an article about the performance gain from migrating to PHP 7, you know how important it is to select versions carefully. If your build is PHP-execution heavy, fetching PHP 7.2 (it’s another leap, but mind the backward incompatibilities) could totally make sense, and once your code is compatible it’s as easy as:

language: php
php:
  - '7.2'

Almost certainly, a similar thing could be written about Node.js, relational databases, and so on. If you know what the bottleneck in your build is and find the best-performing versions, newer or older, it will improve your speed. Does that conflict with the previous point about pre-installed versions? Not really; just measure which one helps your build the most!
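To run that measurement, you can simply list several versions and let Travis build each one in its own job, then compare the timings in the logs:

```yaml
language: php
php:
  - '7.0'   # pre-installed on trusty, fastest to boot
  - '7.2'   # fetched separately, but may execute your code faster
```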

Make it Parallel

When a Travis job is running, 2 cores and 4 GBytes of RAM are available, and that’s something to rely on! Downloading packages should happen in parallel; drush make, gulp and other tools like that may do it out of the box, so check your parameters and config files. On a higher level, let’s say you’d like to execute a unit test and a browser-based test as well. You can ask Travis to spin up two (or more) containers concurrently: in the first, you install the unit testing dependencies and run that suite; the second one takes care of only the functional test. We have a fine-grained example of this approach in our Drupal-Elm Starter, where 7 containers are used for various testing and linting tasks. Beyond the great reduction in execution time, the result is also more fine-grained: instead of a single boolean value, just by glancing at the build you get an overview of what is broken.
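One common way to fan a build out over several containers is an env: list: Travis starts one job per entry. The suite names and script path below are assumptions for illustration:

```yaml
env:
  - TEST_SUITE=unit
  - TEST_SUITE=functional
script:
  - ./ci/run-tests.sh "$TEST_SUITE"
```

Each job then installs only the dependencies its own suite needs, so the slowest suite no longer gates the others.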

All in all, it’s a warm fuzzy feeling that Travis is happy to create so many containers for your humble project:

If it's independent, no need to serialize the execution

Utilize RAM

The available memory is currently between 4 and 7.5 GBytes, depending on the configuration, and it should be used as much as possible. One example is moving the database’s main working directory to a memory-based filesystem. For many simpler projects that’s absolutely doable, and at least for Drupal it’s a solid speedup. Needless to say, we have an example, and on client projects we saw a 15-30% improvement in SimpleTest execution time. For a traditional RDBMS, give it a try; if your database cannot fit in memory, you can still let InnoDB use as much of it as possible.
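A sketch of the tmpfs trick in .travis.yml terms follows; the mount point and size are assumptions, and your MySQL data directory path may differ:

```yaml
before_script:
  # Move MySQL's data directory onto a RAM-backed filesystem.
  - sudo service mysql stop
  - sudo mkdir /mnt/ramdisk
  - sudo mount -t tmpfs -o size=1024m tmpfs /mnt/ramdisk
  - sudo mv /var/lib/mysql /mnt/ramdisk
  - sudo ln -s /mnt/ramdisk/mysql /var/lib/mysql
  - sudo service mysql start
```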

Think about your use case: even moving the whole document root there could be legitimate, and if you need to compile source code, doing that in memory makes sense as well.

Build Your Own Docker Image

If your project is really exotic or a legacy one, it can make sense to maintain your own Docker image and then download and execute it on Travis. We did this in the past and later moved away from it. Maintaining your own image means recurring effort and fighting with outdated versions and unavailable dependencies; that’s what to expect. Still, it can be a form of performance optimization if you have lots of software dependencies that are hard to install on the current Travis container images.

+1 - Debug with Ease

To work on various improvements to the Travis integration of your projects, it’s a must to spot issues quickly. What works on localhost might or might not work on Travis, and you should find the root cause quickly.

In the past, we advocated video recording; now I’d recommend something else. You have a web application: for all the backend errors, there’s a tool to access the logs (for Drupal, you can use Drush). But what about the frontend? Headless Chrome is neat and has built-in debugging capability, and the best part is that you can break out of the box using Ngrok. Without any X11 forwarding (which is not available) or a local hack that tries to mimic Travis, you can play with your app running in the Travis environment. All you need to do is:

  • Start a debug build and execute the installation steps (travis_run_before_install, travis_run_install, travis_run_before_script).
  • Start Headless Chrome (google-chrome --headless --remote-debugging-port=9222).
  • Download Ngrok and start a tunnel (ngrok http 9222).
  • Visit the exposed URL from your local Chrome and have fun with inspection, the debugger console, and more.


Working on such improvements has benefits of many kinds. The entire development team can enjoy shorter queues and faster merges, and you can apply some of the enhancements to your local environment as well, especially if you dig deep into database performance optimization and parallel execution. Even better, clients love to hear that you are going to speed up their sites, as this mindset should carry over to production too.

Continue reading…

Posted by Gizra on Tue, 02/06/2018 - 08:00

Elm’s type system is sufficiently sophisticated that you’ll often want to make
fine-grained distinctions between roughly similar types. In a
recent project, for instance, we ended up
with a separate type for a Mother and a Child.1 Now, a Mother
is a little different than a Child. Most obviously, mothers have children,
whereas (at least, in our data model) children do not. So, it was nice for them
to be separate types. In fact, there were certain operations which could be done
on a Mother but not a Child (and vice versa). So it was nice to be able to
enforce that at the type level.

Yet, a Mother and a Child clearly have a lot in common as well. For this
reason, it sometimes felt natural to write functions that could accept either. So, in
those cases, it was a little awkward for them to be separate types. Something
was needed to express a relationship between the two types.

What alternatives are available to do this sort of thing in Elm? Which did we
end up choosing? For answers to these questions, and more, read on!

  1. There were no fathers in our app’s data model. 

Continue reading…

Posted by Brian Hicks on Thu, 02/01/2018 - 15:56

The State of Elm 2018 is on!
We put on this survey every year to take the pulse of the Elm community.
Where have we been, and where are we going?

The survey is open now and will close on March 1.
You can take it below or in full screen mode (better for mobile users).

Continue Reading

Posted by Alex Korban on Tue, 01/23/2018 - 02:00

I'm working on a tool that handles PostgreSQL EXPLAIN output in JSON format.

The data consists of a tree of nodes representing different parts of a query execution plan:

Sort on zone_id
  Hash Join
    Seq Scan on zones
    Hash
      Seq Scan on projects

Each node has a lot of attributes (more than 10), but with a significant portion of attributes common to all nodes.

The large number of attributes led me to use the Json.Decode.Pipeline package because it makes them easier to handle.

First attempt: universal...


Posted by Gizra on Fri, 01/05/2018 - 08:00

Once you start writing apps (and packages) in Elm, it’s
tempting to avoid the rough-and-tumble world of Javascript as much as possible.
Yet when implementing features for paying clients, it doesn’t always make sense
to take things that already have a Javascript implementation and re-implement
them in pure Elm. In fact, sometimes it isn’t even possible!

Now, Elm has a very fine mechanism for integrating bits of Javascript when
necessary – ports!
Yet ports aren’t always the right answer, and there are several alternatives
which can be useful in certain situations.

For the purposes of this post, I’m going to assume that you’re familiar with
the many cases in which ports work well, and focus instead on a few cases where
you might want to try something else:

  • When you want synchronous answers.
  • When you need some context when you get the answer.
  • When you want to manage parts of the DOM using Javascript.

Continue reading…

Posted by Gizra on Tue, 01/02/2018 - 08:00

I tell my kids all the time that they can’t have both - whether it’s ice cream and cake or pizza and donuts - and they don’t like it. It’s because kids are uncorrupted, and their view of the world is pretty straightforward - usually characterized by a simple question: why not?

And so it goes with web projects:

Stakeholder: I want it to be like [insert billion dollar company]’s site where the options refresh as the user makes choices.

Me: [Thinks to self, “Do you know how many millions of dollars went into that?”] Hmm, well, it’s complicated…

Stakeholder: What do you mean? I’ve seen it in a few places [names other billion dollar companies].

Me: [Gosh, you know, you’re right] Well, I mean, that’s a pretty sophisticated application, and well, your current site is Drupal, and well, Drupal is in fact really great for decoupled solutions, but generally we’d want to redo the whole architecture… and that’s kind of a total rebuild…

Stakeholder: [eyes glazed over] Yeah, we don’t want to do that.

But there is a way.

Continue reading…

Posted by Gizra on Mon, 12/25/2017 - 00:00

If you happen to know Brice - my colleague and Gizra’s CEO - you probably have picked up that he doesn’t get rattled too easily. While I find myself developing extremely annoying ticks during stressful situations, Brice is a role model for stoicism.

Combine that with the fact that he knows I dislike speaking on the phone, let alone at 6:53pm, almost two hours after my work day is over, you’d probably understand why I was surprised to get a call from him. “Surprised” as in, immediately getting a stomach ache.

The day I got that call from him was a Sale day. You see, we have this product we’ve developed called “Circuit Auction”, which allows auction houses to manage their catalog and run live, real-time auction sales - the “Going once, Going twice” type.

- “Listen Bruce,” (that’s what I call him) “I’m on my way to work out. Did something crash?”
I don’t always think that the worst has happened, but you did just read the background.
- “No.”

I was expecting a long pause. In a way, I think he kind of enjoys those moments, where he knows I don’t know if it’s good or bad news. In a way, I think I actually do somehow enjoy them myself. But instead he said, “Are you next to a computer?”

- “No. I’m in the car. Should I turn back? What happened?”

I really hate to do this, but in order for his next sentence to make sense I have to go back exactly 95 years, to 1922 Tokyo, Japan.

Continue reading…

Posted by Brian Hicks on Wed, 09/06/2017 - 05:39

Breaking Down Decoders From the Bottom Up
Last week, we covered how to break down decoders by starting from the innermost, or topmost, part.
But what if you’re having trouble breaking things down from the top?
(Or you’re dealing with a really complex JSON schema?)

This week, let’s look at it from a different perspective: the outermost structure in (or the bottom up!)

Continue Reading

Posted by Brian Hicks on Wed, 08/30/2017 - 19:58

Breaking Out of Deeply Nested JSON Objects
A reader of the JSON Survival Kit wrote me with a question (lightly edited):

I’ve got a JSON string that works fine in JavaScript:

{
  "Site1": {
    "PC1": { "ip": "x.x.x.x", "version": "3" },
    "PC2": { "ip": "x.x.x.x", "version": "3" }
  },
  "Site2": {
    "PC1": { "ip": "x.x.x.x", "version": "3" },
    "PC2": { "ip": "x.x.x.x", "version": "3" }
  }
}

I really can’t figure out how to parse this–will your book help with nested JSON where the keys are different 2 or 3 levels deep?

If not, then I’ll just give up on Elm–as this is the first project that I’m trying to do, and something as basic as this, I’m finding impossible.

The biggest mindset shift you need to succeed with JSON Decoding is to think of your decoders like bricks.
(I’ve written about this before, and it’s chapter 1 of The JSON Survival Kit.)
You can combine bricks to build whatever you like; the same is true of decoders!

Continue Reading

Posted by Ilias Van Peer on Wed, 08/02/2017 - 13:09

The one runtime exception nearly every Elm developer will encounter sooner or later is this one, dealing with recursive JSON decoders:

Uncaught TypeError: Cannot read property ‘tag’ of undefined at runHelp

Context

Let’s say you are writing a decoder for a recursive structure.

A first attempt

The most straightforward approach to this type of decoder is to create a branch-decoder, a leaf-decoder and a tree-decoder. However, Elm is an eager language, and functions are evaluated as soon as all of their arguments are passed. In that version, where decoders are simply values, we’re dealing with recursively defined values, and you can’t do that in an eager language. Elm will, of course, point this out in its usual, friendly manner.

Laziness

After reading the linked document, you know you need to introduce laziness using, in this case, Json.Decode.lazy. You may be wondering where to put the call to lazy: should you lazily refer from decoder to branchDecoder, or should it be the other way around? The slightly surprising answer is this:

There is no way to know for sure.

Elm orders its output based on how strongly connected different components are. In other words, a function that has more dependencies is more likely to appear later, and a function that has fewer dependencies is more likely to appear earlier. In our case, that makes the most likely order leafDecoder -> branchDecoder -> decoder. If we lazily refer to branchDecoder from decoder, this order doesn’t change, and branchDecoder will still eagerly refer to decoder, whose definition appears only later in the compiled code. Let’s have a look at what that would compile to.

Sidenote: this is not quite what the compiled code looks like. I’ve stripped out the module prefixes and the partial application calls through the A* functions, and an Elm List really isn’t a JS array. However, it illustrates the core issue. Note that there’s only a single function definition in this whole thing; everything else is eagerly evaluated and has all of its arguments available. After all, decoders are values! Note, also, that branchDecoder is defined before decoder is defined, yet it references decoder. Since only function declarations are hoisted to the top in JavaScript, the above code can’t actually work: decoder will be undefined when branchDecoder is used.

A second attempt

Our second option is moving the lazy call so branchDecoder lazily refers to decoder instead. A look at the pseudo-compiled code shows that we have reached our goal this time.

The correct solution

The order in which compiled results appear in the output isn’t something you should worry about while writing Elm code. Figuring out the strongly connected components to decide where the lazy should go is not what you want to be thinking about. Worse yet, slight changes to your decoders might change the order in the compiled code! The only real option when dealing with this type of recursion is to introduce lazy in both places.

This post was extracted from a document originally part of my Demystifying Decoders project.
Learn more about that here: Demystifying Elm JSON Decoders.

Help, my recursive Decoder caused a runtime exception! was originally published in Ilias Van Peer on Medium, where people are continuing the conversation by highlighting and responding to this story.

Posted by Brian Hicks on Thu, 07/27/2017 - 19:02

The State of Elm 2017 results are here!
I first presented these as two talks, one to Elm Europe and one to the Oslo Elm Day.
I’ve embedded the Elm Europe talk below (it’s shorter), but the longer version is also on YouTube.

Continue Reading

Posted by Ilias Van Peer on Sat, 07/08/2017 - 11:53

I figured it would be fun to take a tiny function and explain how it works, line by line. Let’s examine it, line by line, function by function. Noting down the link to the documentation, the signature of each function used, and what the inferred types look like at each point should prove, if nothing else, interesting to some!

HTTP requests in Elm, line-by-line was originally published in Ilias Van Peer on Medium.

Posted by Ilias Van Peer on Sat, 07/01/2017 - 19:38

elm-reactor is an underrated tool. Not only does it do on-demand recompilation of Elm source code, but it can serve up other assets, too. But did you know you can serve your own HTML with live-compiled Elm code as well? This is useful if you need JS interop or want to start your program with flags.

The trick is that elm-reactor exposes a “magical” /_compile directory: any Elm file prefixed with that path will be pulled in and compiled on page-load.

For example, start with a folder structure like this:

myProject/
|- elm-package.json
|- index.html
`- src/
   `- Main.elm

Placing your index.html at the same level as your elm-package.json means that running elm-reactor from your myProject folder will allow you to point your browser to http://localhost:8000/index.html

As for the contents of your index.html, start with something like this:

<html>
<head>
  <style> /* custom styles? Sure! */ </style>
</head>
<body>
  <!-- Relative to index.html, main.elm lives in `src/Main.elm`. -->
  <!-- Prefixing that with `/_compile/` gives us magic! -->
  <script src="/_compile/src/Main.elm"></script>
  <script>
    var app = Elm.Main.fullscreen()
    // You could also pass flags, or set up some ports, ...
  </script>
</body>
</html>

There, all set! Note that elm-reactor has also learned to serve quite a few other file types with the correct content-type headers, so pulling in some CSS, images or JSON should work, too.

Shout-out to @ohanhi for the tip!

Elm reactor and custom HTML was originally published in Ilias Van Peer on Medium.

Posted by Ilias Van Peer on Wed, 06/28/2017 - 16:27

JSON Decoders be what?

Coming from JavaScript, where JSON is the most natural thing ever, having to write decoders to work with JSON in Elm is a mystifying experience. On some level, you understand that you need some way to convert these foreign objects into statically typed structures for safe use in Elm. And yet, it’s… weird.

Some people learn best by reading about how decoders work and looking at examples. Other people learn best by doing: writing decoders, from “very simple” to “I didn’t know you could do that”, progressively cranking up the complexity level, and knowing that what they wrote is correct.

To that end, I’ve put together a series of exercises that aim to walk you through writing JSON decoders: zwilias/elm-demystify-decoders. Each exercise aims to be a little more difficult than the one before, and to introduce new concepts at a fairly reasonable pace. The code is there, all contributions are welcome, and if you get stuck on something, create an issue and someone will help you out sooner or later.

Now go and write some decoders!

Demystifying Elm JSON Decoders was originally published in Ilias Van Peer on Medium.