Posted by Brian Hicks on Mon, 10/03/2016 - 17:00

Sometimes JSON just doesn’t play nice with our nice type systems. Consuming JSON
from a wide variety of sources can be challenging. Even when working with an
internal team you can be dealing with strange encodings. One common pattern is a
JSON object with dynamic keys. How we deal with this depends on the semantics of
the data, but it breaks down into two distinct patterns.

Posted by Brian Hicks on Mon, 09/26/2016 - 17:00

Richard Feldman spoke two weeks ago at the first elm-conf. (It went quite well,
thank you for asking!) He pointed out as a code smell something that’s been
bothering me for a while. I want to emphasize it, so go ahead and watch the
recording and then we’ll talk about it. It’s only 25 minutes and well worth your
time:

Posted by Gizra on Sat, 09/24/2016 - 08:00

Baseball, Apple Pie, and Web Frameworks

I like baseball better than football (the American version). Football is a game of inches – but it’s measured in yards. Imprecise scope is built into the system. Baseball, on the other hand, is a game of wildly random occurrences that are often measured to the third decimal place. An entire framework exists to understand the smallest, yet important, details of the game.

What I like about baseball is what I like about the current state of web applications. There is a growing set of frameworks that allow you to “scratch your own itch” and be precise about your scope in ways that you never could before. I really like going to web conferences, and seeing things like Drupal being used as a backend to serve content to some other front-end framework (enter your favorite: Angular, React, Ember) that can bend, shape, and re-present that content in ways that Drupal never imagined.

And as a web development agency that focuses on complex content management, that has huge – and really exciting – implications for how we do business.

What’s a Major Corporation To Do?

Take for example a conversation that started with someone who does training and assessment at a Fortune 50 corporation – one that trains a lot of employees. At some point she asked me “Do you guys do inbox simulations?” I had to think for a second if I even knew what an inbox simulation was - and it turns out, it’s exactly what it sounds like: a simulated email inbox to test and assess an employee’s response and prioritization skills.

My first response was “No,” and my second – almost immediate – followup was, “but I don’t see why we couldn’t.”

The problem they had was that none of the software they had tried was giving them precisely what they needed. And that’s not surprising. There are more than 500 Learning Management Systems on the market, each with its own bloated feature set trying to solve specific use cases with general tools. It’s also not surprising that fewer than 25% of corporate LMS users are “very satisfied” with their system. Given the large feature sets and the likely time it took to get them to market, most of them are probably built on technology that’s already five years old.

Enter Proof of Concept

That conversation led to a “show us what you can do” meeting. This was a problem because, well, we had never done it before. My five-or-six-years-ago brain said to me, “We can’t possibly create a demo of an inbox simulation – I’ll just put together a slide deck that explains what I’m talking about here with web frameworks.”

That’s when Amitai said, “Let’s create an inbox simulation for your meeting – we can do it in Elm. Open a repo and I’ll show you how. What should it have?”

I answered (dubiously), “Well, it should look and feel like an Outlook inbox, and we should be able to demonstrate that their training logic can be applied to simple email tasks.”

“You mean like if you respond one way, you get a certain response back.”

“Yeah, something like that.”

The Scaffolding of an Inbox

So I opened a repository for the project, which at Gizra starts with a stack that includes a place to create static prototype pages, served by Jekyll and automatically updated and published by Gulp. The Semantic UI CSS framework is included so that we get all the goodies that come with it and don’t need to reinvent the wheel on design elements (we recently switched from Bootstrap, and I already like it a lot better, if only because our prototypes don’t look like every Bootstrap website ever).

In a perfect world, I wanted three things:

  1. An inbox that looked realistic.
  2. A dashboard that reflected activity in the inbox.
  3. An admin screen that allowed manipulation of the inbox content.

I started with the admin screen, because that seemed the least daunting.

I am a terrible sketch artist, and I often don’t carry paper. This was sketched on a napkin from a coffee shop.

Once I had the idea, I moved quickly into the static prototype, because my CSS skills dramatically outweigh my drawing skills.

That’s a little better.

The next step was the inbox itself, and because I wanted it to look like Outlook, we figured we could try to grab the HTML, CSS and JS from an Outlook Online account that I had created for this purpose.

What a ridiculous mess that was.

Thirty minutes into that task, I realized it would be easier to recreate the inbox from scratch. Semantic UI made it pretty easy. The Font Awesome icons already being there, and the fact that it’s flexbox friendly, meant that I had a pretty good static version up in about 4 hours (it could have been faster, but it was my first time really using Semantic UI, and I was trying to follow strict BEM (http://getbem.com/) principles, which we also recently started at Gizra).

Starting to look like the real thing.

And with that, I made a pull request, and went to bed.

Making it dynamic with Elm

The next day, some strange miracle had occurred.

Amitai had created a basic Elm app, converted my HTML markup to Elm, and created a basic model for a functioning app. I had heard Amitai speak and had read about how Elm’s compiler, which catches errors before they can become runtime errors, makes development so much faster, but seeing it in action was pretty amazing. Our conversation on GitHub:

The Elm is strong with this one.

Creating that dashboard referenced in the conversation was fairly easy too. Semantic UI has a lot of nice-looking tables and classes to vary the look enough to get a lot of different options. I found one I liked, filled it with enough dummy data to give it the feel of a real dashboard, and we were all set.

In the meantime, Amitai created a nifty little delayed response function. If you choose a particular response (in this case, some version of “ignore”), you get a followup email demanding your attention.

Don’t ignore my emails!

We spent the next day or two refining features, polishing the layout, and replacing the dummy text.

Of course, I needed to add a few tweaks to the layout, and add sample emails that were more realistic, and some logic that made sense. To do that I had to get into Elm and figure out how it works - in particular how to make it present the HTML syntax I needed. It turns out that was pretty logical and straightforward. I’ve barely scratched the surface, but I’m pretty pleased to have my first few commits on an Elm project.

The Result

You can look at the GitHub repository and try out the sample application, but the final product is a simple response to a complex need. We got there in several days by breaking down a complex problem into small, surmountable tasks - a method we call The Gizra Way. In this case, we ignored all other features, even how to permanently store the data – just a simple single page application that shows a realistic inbox with a few features. I never got my admin screen - there wasn’t enough time, and there’s other stuff to do.

We are, of course, helped enormously by a robust set of web frameworks that are helping us do web tasks faster, with greater flexibility, and with a precision like never before.

Continue reading…

Posted by Rundis on Thu, 09/15/2016 - 01:00

Version 0.4.0 marks the first version of Elm Light that uses ASTs to enable more advanced IDE-like features.
This version includes features like find usages, jump to definition, a context-aware autocompleter, and some simple refactorings.
It’s early days, but I have no doubt it will enable some pretty cool features going forward.

Evan Czaplicki, the author of Elm, has told the community on several occasions not to block on something
not being available from Elm. I’ll have to admit that I’ve been hoping for more tooling hooks from Elm for quite some time; an official AST coupled with
the Elm compiler would be super sweet. It’s definitely on the roadmap, but not a high priority for Elm (right now).
My best bet would be to wait for the AST work put into elm-format to be made available. That might
actually not be too far off. But several weeks ago I decided I wanted to give it a shot and do something simplified on my
own. Mainly as a learning experience, but also to gather data for use cases that an AST can support and to learn a bit about parsing.

You’ll find a demo of the new features added in version 0.4.0 below. The rest of this post gives a brief description
of my journey to create a parser and how I integrated that into the plugin.

You can find the elm-light plugin here

Demo of 0.4.0 Features

ScreenCast demo

Other relevant demos:

Creating a parser

Researching

It actually started a while back when I bought a book about parsers. It was almost 1000 pages. It turned out
to be very uninspiring bedtime reading. I guess I wasn’t motivated enough.

My only other experience with parsing since my university days was the stuff I did when porting rewrite-clj
to ClojureScript. That ended up becoming rewrite-cljs, which I’ve used for some other Light Table plugins I’ve created.
But the syntax of Clojure is comparatively simple, and since I did a port I can’t really claim any credit for the actual parsing anyway.

In the Clojure world I’ve used InstaParse, which is a really neat library for building parsers.
It also has a ClojureScript port, which I thought would be a good fit for Light Table. I found an old BNF for Elm called elm-spoofax,
so I thought: let’s give it a go. I spent a good week or so getting something that seemed to parse most Elm files I threw at it
and provided a tree of nodes which looked fairly decent to work with. However, I hadn’t read the README for the ClojureScript port
that well, and hadn’t really reflected on what an order of magnitude slower than its Clojure big brother actually meant.
With a couple of hundred lines I started seeing parse times nearing a second. I’m sure it could be optimized and tuned somewhat,
but it was way off the mark of what I was going to need for continuous as-you-type parsing.

Back to the drawing board. I started looking at a ton of alternatives: parser generators, parser combinators, and so on.

Enter PEG.js

After trying out a few parser generators I came across PEG.js. It looked approachable enough
to me, and they even had a nice online tool. So I set out on my way and decided to keep it simple: just parse
top-level definitions. I spent a few days getting an initial version up and running, and then it was time to give it a performance test.
YAY: for most files I got < 10ms parse times, and for some quite big ones (thousands of lines) I started seeing 100ms parse times.
It still seemed worth pursuing. So I did!
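
As a rough illustration of the kind of measurement involved, here is a minimal timing sketch (the grammar and source file names are placeholders, and depending on the PEG.js version the generate call may be named buildParser instead):

var fs = require("fs");
var peg = require("pegjs");

// Generate a parser from the grammar and time a single parse.
// "elm.pegjs" and "Main.elm" are hypothetical file names.
var grammar = fs.readFileSync("elm.pegjs", "utf8");
var parser = peg.generate(grammar);   // peg.buildParser(grammar) in older PEG.js versions

var source = fs.readFileSync("Main.elm", "utf8");
var start = Date.now();
parser.parse(source);
console.log("Parsed in", Date.now() - start, "ms");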

PEG.js is a simple parser generator. It supports a syntax that is BNF-like, but you can sprinkle it with
some JavaScript when appropriate. It also has nice error reporting and a few other nifty features.

module (1)
  = declaration:moduledeclaration EOS
    LAYOUT
    imports:imports?
    LAYOUT
    toplevel:topLevelDeclarations?
    LAYOUT
    {
      return {
        moduledeclaration: declaration,
        imports: imports,
        declarations: toplevel
      };
    }

moduledeclaration (2)
  = type:(type:("effect" / "port") __ { return type; })? "module" __ name:upperIds __ exposing:exposing
    {
      return {
        type: ((type || "") + " module").trim(),
        name: name,
        exposing: exposing
      };
    }

// .. etc

1
The top-level rule. It sort of looks like BNF, but you’ll also notice some JavaScript.

2
The rule for parsing the module declaration, which again uses other rules, which again…

I basically used a process of looking at this old Elm BNF
as inspiration and then adjusting along the way. The PEG.js online tool was really helpful during this work.

Why a JavaScript parser generator?

Well, Light Table is based on Electron, so it’s basically a node server with a browser client built in.
Having a parser that plays seamlessly with the basic building blocks of the browser is both convenient
and practical in terms of distribution. I can just require the parser as a node module and off we go.
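
For instance, if the generated parser has been saved as a CommonJS module, using it from the node process can look roughly like this (a sketch; the module path and helper name are made up for illustration):

var fs = require("fs");
var elmParser = require("./elm-ast-parser");   // hypothetical path to the generated PEG.js parser

// Read an Elm file and return the object tree produced by the grammar's JavaScript actions.
function parseFile(file) {
  var source = fs.readFileSync(file, "utf8");
  return elmParser.parse(source);
}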

The second reason is that my Haskell-fu, for example, is not up to scratch. I would love to do it in Elm,
but the current Elm combinator libraries just don’t provide enough building blocks for me to see
this as a competitive or realistic alternative quite yet.

Designing for As You Type Parsing (AYTP?)

The general idea I had was to design with the following in mind:

  • Parsing everything (including third-party packages) when connecting is a bearable price to pay to ensure everything is hunky-dory and good to go once you are connected.
  • The design should support file changes not only from actions in the editor, but also from any outside process.
  • Things generally have to be asynchronous to ensure the editor stays responsive at all times.
  • Only introduce (persistent) caching if there is no way around it.

Listening for changes

To support parsing whenever a file changes, or whenever you install or remove a package in your Elm projects,
I opted for using Chokidar. Elmjutsu - an excellent Elm plugin for Atom -
provided me with the inspiration here.

Each Elm project in Light Table gets its own node process running Chokidar. Whenever the appropriate events
are fired, it will parse the file(s) needed and notify the Elm plugin editor process with the results.

The code for initiating the watcher

var watcher = chokidar.watch(['elm-package.json', (1)
                              'elm-stuff/exact-dependencies.json',
                              '**/*.elm'], {
  cwd: process.cwd(),
  persistent: true,
  ignoreInitial: false,
  followSymlinks: false,
  atomic: false
});

watcher.on("raw", function(event, file, details) { (2)
  var relFile = path.relative(process.cwd(), file);
  var sourceDirs = getSourceDirs(process.cwd());

  if (relFile === "elm-stuff/exact-dependencies.json") {
    if (event === "modified") {
      parseAllPackageSources(); (3)
    }
    if (event === "deleted") {
      sendAstMsg({
        type: "packagesDeleted"
      });
    }
  }

  if (isSourceFile(sourceDirs, file) && event === "modified") {
    parseAndSend(file); (4)
  }

  if (isSourceFile(sourceDirs, file) && event === "deleted") {
    sendAstMsg({
      file: file,
      type: "deleted"
    });
  }

  if (isSourceFile(sourceDirs, file) && event === "moved") {
    if (fileExists(file)) {
      parseAndSend(file);
    } else {
      sendAstMsg({
        file: file,
        type: "deleted"
      });
    }
  }
});

elmGlobals.watcher = watcher;

1
Start the watcher.

2
To be able to handle renames and a few other edge cases, I ended up listening for raw events from Chokidar.

3
Whenever this file changes, it is very likely due to a package install, update or delete of some kind.
The time spent parsing all package sources is proportionally small compared to the time spent on
a package install, so this "brute-force" approach actually works fine.

4
Parsing a single file on change and notifying the editor process with the results is the common case.

Caching the ASTs

In the editor part of the Elm Light plugin, a Clojure(Script) atom is used to store all projects and their ASTs. Not only does it
store ASTs for your project files, it also stores ASTs for any third-party packages your project depends on.
That means it does use quite a bit of memory, but profiling suggests it’s not too bad actually.
The great thing now is that I have a Clojure data structure I can work with: slice and dice, transform and do all kinds of stuff
using the full power of the clojure.core API. Super powerful and so much fun too :-)

But what about this parsing as you type, then?

Well, for every open Elm editor there is a handler for parsing the editor’s content and updating the AST atom.
Again, the actual parsing is performed in a node client process; otherwise the editor would obviously grind to a halt.

It looks something like this:

(behavior ::elm-parse-editor-on-change (1)
          :desc "Parse a connected elm editor on content change"
          :triggers #{:change}
          :debounce 200 (2)
          :reaction (fn [ed]
                      (object/raise ed :elm.parse.editor))) (3)

(behavior ::elm-parse-editor (4)
          :desc "Initiate parsing of the content/elm code of the given editor"
          :triggers #{:elm.parse.editor :focus :project-connected}
          :reaction (fn [ed]
                      (when (not (str-contains (-> @ed :info :path) "elm-stuff"))
                        (let [client (get-eval-client-if-connected ed :editor.elm.ast.parsetext)
                              path (-> @ed :info :path)]

                          (when (and client
                                     (= (pool/last-active) ed))

                            (clients/send client (5)
                                          :editor.elm.ast.parsetext
                                          {:code (editor/->val ed)}
                                          :only ed))))))

(behavior ::elm-parse-editor-result (6)
          :desc "Handle parse results for a parsed editors content"
          :triggers #{:editor.elm.ast.parsetext.result}
          :reaction (fn [ed res]
                      (if-let [error (:error res)]
                        (do
                          (object/update! ed [:ast-status] assoc :status :error :error error)
                          (object/raise ed :elm.gutter.refresh))
                        (let [path (-> @ed :info :path)]
                          (object/update! ed [:ast-status] assoc :status :ok :error nil)

                          (elm-ast/upsert-ast! (-> (get-editor-client ed) deref :dir) (7)
                                               {:file path
                                                :ast (:ast res)})
                          (object/raise ed :elm.gutter.exposeds.mark)))

                      (elm-ast/update-status-for-editor ed)))

1
This is the behavior (think runtime-configurable event handler) that triggers
parsing whenever the editor contents change.

2
Parsing all the time is not really necessary for most things, so a debounce has
been defined to avoid spamming the node client.

3
We delegate to the behavior below, which is a more generic trigger-parsing behavior.

4
This behavior is responsible for sending off a parse request to the node client.

5
We send the parse request to the node client.

6
Once the node client process has finished parsing, this behavior will be triggered with the result.

7
We update the AST atom with the AST for this particular combination of project and file represented by the editor.

We only update the AST on successful parses. A lot of the time when typing, the editor contents will naturally not
be in a correct state for parsing. We always keep track of the last valid state, which allows the plugin
to still provide features that don’t necessarily need a completely current AST.

There is always an exception

Things were working quite well initially; I managed to get several features up and running.
But when I started to rewrite the autocompleter away from using elm-oracle,
I hit a few killer problems:

  • The continuous parsing started to tax the editor to the point that things became unusable.
  • With debouncing I didn’t have accurate enough results to provide a proper context for context-aware completions.
  • I discovered general performance problems in how I had written my ClojureScript code.
  • For large files, synchronous parsing was out of the question.

Autocompleters are tricky, and doing it synchronously was proving useless for Elm files larger than a few hundred lines.
Back to the drawing board.

Tuning

So providing hints for the autocompleter definitely has to happen asynchronously.
But even that was too taxing for larger files and ASTs, so I spent quite some time optimizing
the ClojureScript code, turning to native JavaScript when that was called for. Heck, I even threw in memoization
in a couple of places to get response times down. Even turning JSON into EDN (the Clojure data format) had to be tweaked to
become performant enough. The whole process was quite challenging and fun.
There are still things to be tuned, but I’ll wait and see what real usage experience provides in terms of cases worth
optimizing for.

Partially synchronous partial parsing

The autocompleter is async, but in some cases it turned out to be feasible to do a partial
parse of the editor’s contents. PEG.js has a feature to support multiple start rules, so I ended
up defining a start rule that only parses the module declaration and any imports.
That allowed the context-sensitive hints for module declarations and imports to have a completely up-to-date
AST (well, as long as it’s valid) and at the same time keep the autocompleter responsive enough.
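
For what it’s worth, the PEG.js options involved look roughly like this (a sketch; "moduleHeader" is a made-up stand-in for the header-only start rule described above, and the file names are placeholders):

var fs = require("fs");
var peg = require("pegjs");

var grammar = fs.readFileSync("elm.pegjs", "utf8");
var source = fs.readFileSync("Main.elm", "utf8");

// Expose both the full grammar and a cheap header-only rule as entry points.
var parser = peg.generate(grammar, {
  allowedStartRules: ["module", "moduleHeader"]
});

// Full parse (used asynchronously to refresh the AST atom)...
var fullAst = parser.parse(source);

// ...and a synchronous partial parse of just the module declaration and imports.
var headerAst = parser.parse(source, { startRule: "moduleHeader" });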

Really large files

Depending on who you ask you might get a different definition, but to me, Elm files that are several thousand
lines long are large. So hopefully they are more the exception than the rule. But for files of that
size the autocompleter will be a little sluggish. Not too bad (on my machine!), but you will notice it.

If you experience this, do let me know. Also be aware that turning off the autocompleter is definitely an option
and easy for you to do; the guide contains instructions for how to do that.

Refactoring

It would be really neat if I could refactor in the AST itself and just "print" the updated result
back to the editor. However, with the complexity of the AST already, the fact that I’m not even parsing everything yet,
and all the interesting challenges of an indentation-sensitive language with lots of flexibility in terms of comments and whitespace…
well, that’ll have to be a future enterprise.

That’s not entirely true, though. For a couple of the features I sort of do that, but only for a
select few nodes of the AST, and the change is not persisted to the AST atom (think global database of ASTs).
So it’s like a one-way dataflow:

  • get necessary nodes from AST atom
  • update the node(s)
  • print to editor
  • editor change triggers AST parsing for editor
  • node client notifies editor behaviour responsible for updating the AST atom
  • AST Atom gets updated
  • The AST atom is up to date, but slightly after the editor

(behavior ::elm-expose-top-level
          :desc "Behavior to expose top level Elm declaration"
          :triggers #{:elm.expose.top.level}
          :reaction (fn [ed]
                      (let [path (-> @ed :info :path)
                            prj-path (project-path path)
                            module (elm-ast/get-module-ast prj-path path) (1)
                            exposing (-> module :ast :moduleDeclaration :exposing)] (2)

                        (when-let [decl (elm-ast/find-top-level-declaration-by-pos (3)
                                          (editor/->cursor ed)
                                          module)]
                          (when-not (elm-ast/exposed-by-module? module (:value decl))
                            (let [{:keys [start end]} (elm-ast/->range (:location exposing))
                                  upd-exp (elm-ast/expose-decl decl exposing) (4)
                                  pos (editor/->cursor ed)
                                  bm (editor/bookmark ed pos)]
                              (editor/replace ed (5)
                                              start
                                              end
                                              (elm-ast/print-exposing upd-exp))
                              (safe-move-cursor ed bm pos)))))))

1
Get the AST root node for the module the current editor represents.

2
From that, retrieve the exposing node (this is the one we want to update).

3
Find the declaration to expose based on where the cursor is placed in the editor.

4
Update the exposing AST node to also expose the declaration found in (3).

5
Overwrite the exposing node in the editor; that works because we already have its
current location :-)

Once the editor is changed, the normal process for updating the global AST atom is triggered.

Summary and going forward

Writing a parser (with the help of a parser generator) has been a really valuable learning experience.
After my failed attempt with InstaParse, it’s hard to describe the feeling I had when I saw the numbers
from my PEG.js-based implementation. I tried to talk to my wife about it, but she couldn’t really see what the fuss was all
about!

I’ll continue to make the parser better, but the plan isn’t to spend massive amounts of time on making it perfect.
I’d rather turn my attention to trying to help the Elm community and its tooling people get access
to an AST on steroids. My bet is that the AST from elm-format is going to be the way forward, so I’ll try
to help out there. Hopefully my own experience will be useful in that process.

I’m pretty sure I can carry on making some pretty cool features with the AST I already have,
so there will definitely be some cool stuff coming in Elm Light in the near future, regardless
of what happens in the AST space and tooling hooks for Elm in general.

Posted by Brian Hicks on Mon, 08/22/2016 - 17:37

There have been several recent questions on the elm-discuss mailing list about decoding large JSON objects.
The problem: Elm’s decoders provide for decoding objects with up to 8 fields, but what happens when you need more?
The solution here is, unfortunately, not super obvious.

Posted by Brian Hicks on Mon, 08/15/2016 - 17:00

Last time we talked about using <| and |>.
<| and |> allow you to create pipelines through which data can flow (like water).
That’s all well and good, but what if you need pipes without the water?
Well, that’s easy enough to do with function composition!

Posted by Brian Hicks on Mon, 08/08/2016 - 17:00

Say you’ve got a bunch of functions, and you want to use them together.
This is a common situation, but it can get a little… messy.
Let’s take an example from the Elm docs:

scale 2 (move (10,10) (filled blue (ngon 5 30)))

This is, well, just OK.
A few parentheses go a long way, but this is just unclear.
You have to follow them very closely to figure out the evaluation order.
Editor highlighting can help, but wouldn’t it be better to get rid of the problem?
But how do we do that?

Posted by Gizra on Thu, 07/28/2016 - 00:00

I work at Gizra, so it was only a matter of time before Elm infected me as well, and I think it’s growing on me.

I wanted to build something a little different, not just the plain old TodoMVC. So, I harnessed every bit of creativity I had and came up with the most radical idea ever - I took the TodoMVC in Elm and got it to work in Electron, and called it Elmctron (I know, so creative of me).

Electron enables you to build cross platform desktop apps with web technologies. So we can take all the goodies we get with Elm and use them in our desktop application. It’s a brand new world!
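
For context, the Electron side of such an app can be as small as a main process that opens a window pointing at an index.html which loads the compiled Elm bundle; a minimal sketch (file and path names are assumptions):

// main.js — illustrative Electron entry point; index.html is assumed to load the compiled Elm JS.
var electron = require("electron");
var app = electron.app;
var BrowserWindow = electron.BrowserWindow;

var mainWindow = null;

app.on("ready", function () {
  mainWindow = new BrowserWindow({ width: 800, height: 600 });
  mainWindow.loadURL("file://" + __dirname + "/index.html");

  mainWindow.on("closed", function () {
    mainWindow = null;
  });
});

// Quit when all windows are closed (except on macOS, by convention).
app.on("window-all-closed", function () {
  if (process.platform !== "darwin") {
    app.quit();
  }
});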

It was my thought that we should build a couple of gulp tasks to make our life easier - to do the bare minimum, because after all, who wants to do more than they have to? (Let’s hope my boss will not read this part.)

So, with that in mind, the only commands I want to run are git clone .., npm install, and gulp. The gulp tasks should (a rough sketch follows the list):

  • Compile SASS to CSS.
  • Compile Elm to JS.
  • Watch and auto-reload.
  • Automagically download and install elm packages.
  • Start the electron app.
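
Here is what such a gulpfile could roughly look like. This is illustrative only: the gulp-sass and gulp-elm plugins, the src/dist paths, and running electron from the PATH are all assumptions, and the task syntax is gulp 3 style.

// gulpfile.js — illustrative sketch, not the project's actual build file.
var gulp = require("gulp");
var sass = require("gulp-sass");          // assumed SASS plugin
var elm = require("gulp-elm");            // assumed Elm plugin
var spawn = require("child_process").spawn;

gulp.task("sass", function () {
  return gulp.src("src/sass/**/*.scss")
    .pipe(sass())
    .pipe(gulp.dest("dist/css"));
});

// gulp-elm's init task installs the Elm packages before compiling.
gulp.task("elm-init", elm.init);

gulp.task("elm", ["elm-init"], function () {
  return gulp.src("src/elm/Main.elm")
    .pipe(elm())
    .pipe(gulp.dest("dist/js"));
});

gulp.task("watch", function () {
  gulp.watch("src/sass/**/*.scss", ["sass"]);
  gulp.watch("src/elm/**/*.elm", ["elm"]);
});

gulp.task("electron", function () {
  // Assumes the electron binary (e.g. from electron-prebuilt) is available.
  spawn("electron", ["."], { stdio: "inherit" });
});

gulp.task("default", ["sass", "elm", "watch", "electron"]);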

Continue reading…

Posted by Brian Hicks on Mon, 07/25/2016 - 19:35

Elm is usually pretty clear, but there are certain things that are a little hard to search for.
One of those is the ! operator, introduced in 0.17.
What does it do?
Where does it come from?
And even more important, when should you use it?

Posted by theburningmonk.com on Mon, 07/18/2016 - 12:06

Hello, the recording of my Elm talk at Polyconf this year is available now.

Posted by LambdaCat on Sun, 07/10/2016 - 23:15

So, Purescript.

If you know Elm and want to add to your toolset a language that actually has those higher abstractions you've heard about, Purescript is the obvious choice.

Purescript compiles to JavaScript too; it can interoperate with any JavaScript library, has no runtime, and has most of those

Posted by LambdaCat on Fri, 07/08/2016 - 18:34

I was supposed to give this talk at the July Cambridge DDD night, but I was ko'd by a bad flu, so I sent this video instead.

I did it while running a fever, but apparently it wasn't too bad, so here it is.

If any errors made it in,

Posted by LambdaCat on Sun, 07/03/2016 - 00:19

If you're familiar with Linux, you have certainly encountered pipes:

find / -name somename | grep ...

they take the result from one command and pass it to the next, allowing you to chain multiple commands together.

Road to Elm - Toc

They're helpful and convenient, and take advantage of the compositionality

Posted by LambdaCat on Sat, 07/02/2016 - 23:57

Chances are, if you've been reading Elm code older than a year, that you've seen a strange squiggly symbol <~.

It used to be exposed by the Signal namespace, but was then obsoleted in favour of just using its full name Signal.map (which is now itself obsolete).

in merge

Posted by Gizra on Thu, 06/16/2016 - 00:00

I’m going to give an Elm session at the next YGLF conf. This was a great excuse to free up some hours to work on a new v0.17 SPA (Single Page Application). You won’t believe what happened next…

Well, actually, you would: it was an awesome experience :)
In fact I’ve reached the point that the backend me is becoming jealous of frontend me.

View demo
Get the source code

Fetch GitHub user’s info on this fake login.

My goal with building this demo app was to give a small, yet realistic, look into how Elm
allows us to accomplish daily tasks such as Http requests, routing, access, and more.
It was important for me to structure it in the same way that we structure larger apps built for production, so that it could demonstrate more effectively how Elm can be used in a project.

If you are interested in Elm, and want to get a feel for how it could work for your apps, this might be a good starting point. I had only wanted to add a single test to show how that could be done. But Elm being such a fun, predictable, opinionated, and fun (no mistake here, it deserves the double fun) language to work with, I ended up adding more and more tests.
Isn’t that yet another great sign for Elm? I was adding unit tests for a demo app, while we hardly added any unit tests for our Angular apps in production!

I was holding myself back from adding too many features, but I couldn’t resist polishing the existing ones and adding lots of comments. With the compiler’s tough love and ever-growing unit tests, any change was
so easy it almost felt like cheating (and note that I rarely write “easy” or “trivial” on development issues).

Continue reading…