Posted by wintvelt on Wed, 01/18/2017 - 18:34

Some of your types come with Free Functions

Construction | Image credit: Haitham alfalah, via Wikimedia Commons

Consider a typical definition of a Msg type in Elm:

```elm
type Msg
    = UserInput String
    | LoggedIn User
    | Guest User
    | ClickedLogoff
```

In this definition, the types are:

  • Msg: this is the type that is being defined.
  • String: this is the (built-in) type we are using ‘inside’ the Msg type.
  • User: some custom type

What about UserInput, LoggedIn, Guest and ClickedLogoff? They are functions, called constructors. You get them for free: if you define the type, you get the constructors as well.

Constructor functions

UserInput is a function with a capitalized name. It has the signature String -> Msg, which means it takes a String and outputs a Msg. And it's called a constructor, because it constructs a Msg. In

```elm
something = UserInput value
```

the UserInput value part constructs a Msg. So the variable something will be of type Msg and will hold the resulting value of UserInput value.

LoggedIn is also a function, which takes a User as its argument. It outputs a Msg too. And Guest is a function too, with the exact same argument (User) and the same signature. ClickedLogoff is also a constructor, of an equally useful nature (albeit somewhat boring): it takes nothing, and simply returns a Msg.

You typically call a function like UserInput in your view. You pass it to the onInput handler when you render the input field:

```elm
input [ type_ "text", onInput UserInput ] ...
```

The onInput handler is from the Html.Events library. It has the following type signature:

```elm
onInput : (String -> msg) -> Attribute msg
```

This function takes one argument: a function that turns a String into a msg. The msg in the signature is lowercase; the onInput function does not care what type the msg is. That is up to you. It will simply include that type in its output, the Attribute msg.

The UserInput constructor defined earlier matches this signature and does exactly what's needed here: it takes a String (which is whatever the user typed in the input field), and turns it into a Msg.

Constructors in pattern matching

You can also use these constructors to access the contents of a variable later on. Using a case .. of statement we can

  1. Find out which constructor was used.
  2. Access the contents “behind” that constructor (if there is any).

In the example, both the LoggedIn constructor and the Guest constructor carry a User. With a case statement, we can find out which constructor was used, and access the user information:

```elm
case msg of
    LoggedIn normalUser ->
        -- handle the info from the variable normalUser

    Guest guestUser ->
        -- handle the info from guestUser
```

Records come with constructors too!

Consider this record definition in Elm:

```elm
type alias User =
    { name : String
    , age : Int
    }
```

By naming this record User, you also get a constructor for free. The constructor is also called User, and it has (in this example) the signature String -> Int -> User. This function allows you to write User "Bill" 42 to create a new user record.

At first, it may seem weird — maybe even conflicting — that the type and its constructor have the same name. But they will never show up in the same place in code:

  • In type definitions of other types, it is always the type, never the constructor. In the Msg type definition, the User that shows up (twice) is the type.
  • When User shows up in code in type signatures like setName : String -> User -> User, it is always the type, never the constructor.
  • In all other places in your code, if you encounter a User, it will be the constructor.
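As a minimal sketch of that last point (the rename function is my own example, not from the post): in the type signature, User refers to the type; in the body, User is the record constructor.

```elm
type alias User =
    { name : String
    , age : Int
    }


-- In the signature, `User` is the type;
-- in the body, `User` is the (free) record constructor.
rename : String -> User -> User
rename newName user =
    User newName user.age
```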

Type alias or type?

When you make a type for the user record from the example above, you can actually do this in two ways:

```elm
-- Option 1:
type alias User =
    { name : String
    , age : Int
    }

-- Option 2 (called "opaque", or "strongly typed"):
type User
    = User { name : String, age : Int }
```

In option 2 you make the constructor function explicit. I could have given the constructor any name, but if you have only one constructor, it is common to give it the same name as the type.

The compiler is much more forgiving with type aliases. You are basically saying that User is just an alias for anything that is a record with a String called name and an Int called age. So if you have some function defined as:

```elm
hasLegalAge : User -> Bool
hasLegalAge user =
    ...
```

it is fine if you call it like this:

```elm
canBuyOurStuff =
    hasLegalAge { name = "", age = 28 }
```

With a strongly typed User, the compiler will not let you do that. It will complain that you passed some record to hasLegalAge, but it expected a real (strong) User type. The real User type can be made by calling the constructor function, which is also called User, so if you change the code to this, it will compile again:

```elm
canBuyOurStuff =
    hasLegalAge (User { name = "", age = 28 })
```

The strongly typed code looks more complicated, so what's the use of strong types?

For one, it makes it easier to extend the User type. If at some point you want to deal with anonymous users, you can add another constructor to your type:

```elm
type User
    = User { name : String, age : Int }
    | Anonymous
```

And the canBuyOurStuff function from above will still work (!). So less refactoring if you extend your strong type with more constructors.

Another advantage of strong types: if you are making a library, you can make more changes without breaking other code. This works if you do not make the User constructor available to the rest of the world.
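Going back to the Anonymous extension: inside the library, a strongly typed function can pattern match on both constructors. This is my own sketch, not code from the post, and the age threshold of 18 is an assumption:

```elm
type User
    = User { name : String, age : Int }
    | Anonymous


hasLegalAge : User -> Bool
hasLegalAge user =
    case user of
        User record ->
            record.age >= 18

        -- An anonymous user has no known age, so we refuse.
        Anonymous ->
            False
```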
If your module definition is like this:

```elm
module User exposing (User, newUser, setName, ...)
```

you would expose the User type, and some custom (lowercase) functions to create or modify the User. This gives you the freedom to make changes to the inner workings of the User type, without breaking the existing functions you exposed. You could add fields for e.g. address, hobbies, pets, friends etcetera to the record (and expose more functions to read and write them). And you can do this without breaking the original functions, so that anyone or any code that imports the original functions can still use them without refactoring.

Further reading

If you want to know more about types (or if my post was still not clear), here are some other good resources:

The first time I got really confused about types and constructors with the same name was when I got started with Json decoding, which is a mind-bender in itself. The example with the function map2 caused a short-circuit in my brain: "Whut? What is this type doing in a function?" It took me a while to figure out that this was not the type being referenced, but the constructor for the type alias I had defined. [AHA moment]

Hopefully this post will save you from a similar short-circuit. Happy coding!
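That map2 situation might look like the sketch below (the field names are my own assumptions). The User handed to map2 is the constructor function (String -> Int -> User), not the type:

```elm
import Json.Decode as Decode exposing (Decoder)


type alias User =
    { name : String
    , age : Int
    }


-- `User` below is the constructor, applied to the two decoded values.
userDecoder : Decoder User
userDecoder =
    Decode.map2 User
        (Decode.field "name" Decode.string)
        (Decode.field "age" Decode.int)
```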

An intro to constructors in Elm was originally published in Elm shorts on Medium, where people are continuing the conversation by highlighting and responding to this story.

Posted by Brian Hicks on Mon, 01/16/2017 - 17:00

 Union and Remove
With folds done, our sets are shaping up.
Folds unlock some more interesting things for us.
Namely: unions!
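A fold-based union might look something like this sketch, assuming the Set type and the insert and foldl functions built earlier in this series (this is not the post's actual code):

```elm
-- Sketch: fold every member of the left set into the right set.
union : Set comparable -> Set comparable -> Set comparable
union left right =
    foldl insert right left
```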

Continue Reading

Posted by Brian Hicks on Fri, 01/13/2017 - 16:00

Create Custom JSON Decoders in Elm 0.18
You’ve modeled your data exactly how it should be, and everything’s working fine.
Now it’s time to finish your JSON Decoder, but certain fields are strings where in your Elm code they’re complex data types!
This happens most often with dates, but tagged unions have this problem too.

In 0.17 we had customDecoder, which could turn any Result String a into a Decoder a, but it went away in 0.18.
So… what do we do?
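The post's own answer is behind the link, but one common 0.18 pattern is to rebuild customDecoder from andThen, succeed, and fail. A sketch (the Date example assumes Elm 0.18's Date.fromString, which returns a Result String Date):

```elm
import Date exposing (Date)
import Json.Decode as Decode exposing (Decoder)


-- Turn a Result into a decoder that either succeeds or fails,
-- which is essentially what 0.17's customDecoder did for us.
fromResult : Result String a -> Decoder a
fromResult result =
    case result of
        Ok value ->
            Decode.succeed value

        Err message ->
            Decode.fail message


date : Decoder Date
date =
    Decode.string
        |> Decode.andThen (\raw -> fromResult (Date.fromString raw))
```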

Continue Reading

Posted by wintvelt on Wed, 01/11/2017 - 16:44

Keep current page and browser bar in sync, and add data to routes. Continue reading on Elm shorts »

Posted by Brian Hicks on Fri, 01/06/2017 - 15:00

Announcing The JSON Survival Kit
You know how it takes so much effort to produce even the simplest of programs when JSON parsing is involved?
Wouldn’t it be nice if you could breeze right on by that step and get on with writing your business logic?

This is what you’ll get with The JSON Survival Kit, a short ebook on JSON decoding in Elm.
You’ll learn how to piece the JSON decoder API together in a way that works for your situation so you can get back to solving your problems.
Get step-by-step instructions for avoiding boilerplate, write decoders so adapting to new data is a breeze, and finally understand why the time and effort is worth it (hint: it so is.)

Never get stuck on JSON Decoding again!

Continue Reading

Posted by wintvelt on Thu, 01/05/2017 - 14:53

You can, but you probably shouldn't. Continue reading on Elm shorts »

Posted by Ilias Van Peer on Tue, 01/03/2017 - 22:27

[embedded video]

Wait, what?

Somewhere around April, I had a couple of minutes and decided that it would be cool to take a snapshot of myself every time I git commit anything. So, using a couple of lines of Ruby I found somewhere in a PR, I created a git hook to do exactly that. Faces. Many faces.

In order to make sure I'd be able to use this on all my repositories without having to add it manually each and every time (which I'd surely forget), I set up a template directory and ensured the hook was in there and marked as a post-commit hook.

Show me the goods

First, ensure you've installed imagesnap. Assuming you're using homebrew like the rest of us:

```shell
$ brew install imagesnap
```

Now, let's get you set up with a git template directory:

[embedded gist]

Want to just… execute all of the above, all lazy-like?

```shell
$ curl https://git.io/vMIGe | bash
```

Yeah, I know. Piping curl through bash is generally a very bad idea. More here.

Et voilà, we have a working git post-commit hook. Now, let's see all your faces in just about a year.

A year of commits — a visual rundown was originally published in Ilias Van Peer on Medium, where people are continuing the conversation by highlighting and responding to this story.

Posted by Ilias Van Peer on Mon, 01/02/2017 - 21:25

We live in modern times. We have an incredible wealth of tooling for our development needs, much of it "language agnostic". We have IDEs capable of managing source files in many different languages, build tools that can compile and run code written in many different languages, and so much interoperability it'll make your head float. Many of our test runners have pluggable assertion frameworks, and our IDEs are fairly capable of running our tests and rendering a nice test tree with pretty results.

A pretty test tree! Source: JetBrains

What's odd, however, is the degree of coupling in our test runners. More specifically, the degree of coupling between the runner itself and the output format. Running most test runners on the command line will output some pretty, human-readable format, and — after completion — optionally dump a JUnit XML report somewhere, so your CI environment can pick it up and keep track of your test results.

As an aside, I should probably mention that the JUnit XML format isn't properly specified anywhere. There have been some attempts at creating an XSD for it, but the only real source of truth is the code of the JUnit ANT task. Read on for more information.

This means, however, that in order to get a pretty test tree, your IDE has to either:

  • parse the custom CLI output and somewhat magically convert it to something it can render in a pretty tree
  • wait for the run to finish, take the XML report and interpret it
  • do deep, custom integration with your test framework so it can keep informed of progress

Not only that, but this means that if you upgrade your testing framework, you may lose the ability to run tests in your IDE directly, or hit some corner-case situation where half your test results go missing.

Note: in all of the above, I'm using "your IDE" as an example. The main point I'm making is that we're dealing with a huge degree of coupling between a test framework and its output format.

So the closest thing we have to a universal test output format is the JUnit XML format, which is an excellent report format, but not a streaming output format. As a developer, this smells of tight coupling and a violation of the separation-of-concerns design principle.

Enter TAP.

TAP — the Test Anything Protocol — is a sort-of human-readable, plaintext output format which introduces this separation of concerns in the simplest way possible:

A TAP Producer (i.e. your test harness) produces TAP.

A TAP Consumer (i.e. your test result reporter) consumes TAP.

It started life as part of the test harness for Perl, but now has implementations in C, C++, Python, PHP, Perl, Java, JavaScript, and others. And it's great. Got a TAP producer? Install a random TAP consumer, and get your test results in nyan-cat style.

Nyan cat. Nyan cat. Cat cat cat. Cat! Source: calvinmetcalf/tap-nyan

However, for all of its nice additions to the unit testing world, TAP ain't perfect. It has a specification, but most producers take some creative liberty with it. Furthermore, although it looks as if it's a line-based format, its inclusion of inline YAML documents means that any parser needs to buffer and take care of state. And there are some practical shortcomings.

TAP doesn't allow declaring tests

Trying to generate any kind of test tree before actually reporting on success or failure is, as such, impossible. Out of the box, TAP supports 2 statuses and has 2 directives that change the meaning of a test:

  • ok means… well, the test has passed.
  • not ok signifies failure. Obviously.
  • # SKIP can be applied either to a test or a plan, meaning the test's status should not be recorded.
  • # TODO can be applied to a not ok test and signifies that it's currently failing and still needs implementation.
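A minimal TAP stream using these statuses and directives might look like this (my own sketch, loosely following the TAP version 13 specification; the test names are made up):

```
TAP version 13
1..4
ok 1 - parses an empty string
not ok 2 - handles unicode input
ok 3 - talks to the server # SKIP no network available
not ok 4 - multiplies matrices # TODO not implemented yet
```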

Lacking a way to declare a test means it is impossible to create a tree before starting your tests, or to track runtime. Furthermore, the default Perl test harness, when using subtests, will (by necessity) output the results of the subtests before outputting the result of the parent test. Naively, one could just link subtest results to the first test that follows. But even that isn't sufficient, as subtests of different parent tests may arrive out of order.

TAP isn't a line-based protocol

Although TAP is a text-based protocol, it is no longer a line-based protocol since the introduction of inline YAML documents. Intrinsically, a YAML document requires buffering the entire document before parsing it and attaching the result as a diagnostic to whichever test preceded it. This means a lot of context tracking. Worse, this means a TAP parser cannot report a test result before the next test result arrives, which serves as the trigger that no diagnostics are attached. Besides, as subtest results may be reported before the parent test is reported, a TAP parser needs to keep track of its preceding context.

TAP is kind of awkward

Highly subjective, but I feel that this merits mentioning either way. It was meant to be human readable, but it's not clear at all. Especially when subtests and YAMLish diagnostics are used, parsing TAP output — as a human — is, well, awkward. At some point, someone agreed and tried to introduce an alternative syntax: UTO. However, that failed to gain much (if any) traction.

TAP suffers from keeping backwards compatibility

This is a rather sticky subject with many, widely differing opinions, but it needs to be mentioned either way. Introducing better syntax for subtests, for example, is hard to do without breaking implementations that can only parse output in an older version of the spec. Of course, this is kind of to be expected; but as outlined in the TAP philosophy:

TAP should be forward compatible.

As the format is a rather non-extensible one, there are limited possibilities for introducing new syntax:

  • pragmas — which were introduced under the radar, and are intended to influence how the TAP stream is parsed
  • directives — special comments, basically
  • break older parsers

TAP doesn't really support structuring tests

That is to say — you may structure your tests whichever way you like, but the output is currently limited to a single level or — when using a parser compatible with subtests, which aren't officially part of the specification — two levels at most. In today's day and age, where projects often have a couple of thousand tests, this is a rather limiting factor.

TAP requires its tests to output in order

Test output in TAP can (optionally) contain a test number. When this test number is present, the numbers need to be strictly incrementing. Basically, this is a case of "abusing your identifier". It severely hampers running tests in parallel, too, as you can't just interleave the output of many threads.

TAP does not have a clear diagnostics format

Currently (2007/03/17) the format of the data structure represented by a YAML block has not been standardized. It is likely that whatever schema emerges will be able to capture the kind of forensic information about a test’s execution seen in the example above.

Almost 10 years later, a standard schema has still not emerged. So yeah. In conclusion, TAP is rather broken. Many sad.

A broken tap. I've seen worse. Courtesy of yourrepair

Exit TAP

Can we fix it? I think TAP as a format is not the way to go. Rather, a new and improved generic test output protocol (or GTOP for short) would be required. Let's start with a number of use cases that should ultimately be supported when using this GTOP.

A note on wording:

  • GTOP message: A message, formatted according to the imaginary GTOP spec.
  • GTOP stream: A line-separated list of GTOP messages, with new messages appearing as they are created.
  • GTOP producer: A program capable of generating a GTOP stream. Usually, a test-runner or pluggable reporter for a test-runner.
  • GTOP consumer: A program capable of receiving a GTOP stream and handling its messages. This will usually entail visualizing the results described by the GTOP stream.
  • GTOP parser: A library or module that is leveraged by a GTOP consumer to convert the messages on a GTOP stream into objects or events or whatever you have that can be handled by the consumer.

UC1: As an end user, I can run my unit tests, written using a test-runner capable of generating GTOP output, and visualize the results in my IDE, my CI environment, or — by piping the output into a command line GTOP consumer — on the command line.

UC2: As a GTOP parser, I am perfectly content handling the output on a line-by-line basis, and do not need to keep track of previous output in order to verify the validity of the received GTOP output. Ideally, I can use widely available tooling to parse and verify GTOP output, without having to write custom parser logic, or at least keeping it to the bare minimum.

UC3: As a GTOP consumer, I am able to deterministically create and update my tree, using the GTOP messages provided to me by the parser. Even though these messages may spawn from different threads or even different processes altogether, this does not influence my ability to keep track of the state of my tree.

UC4: As an end user, looking at the visualization of my test results, I should be able to locate the definition of the test, if supported by the used GTOP consumer.

UC5: As an end user, I can easily identify why my test failed, if supported by the used GTOP producer and consumer.

UC6: As an end user, I can see that PHPUnit reported risky tests and eslint gave a warning.

Now let’s extract some requirements. Having requirements will help us in creating a format, by checking that all requirements can be fulfilled.Must haves:

  • R1. Use a simple, yet extensible format. [Covers: UC2]
  • R2. Create a simple, machine parsable specification for this format. [Covers: UC2]
  • R3. Allow nesting structures. [Covers: UC3]
  • R4. Provide a way to declare tests and test suites. [Covers: UC3]
  • R5. Unordered messages should be possible. [Covers: UC3]
  • R6. Allow multiple types of test failures and passes. [Covers: UC6]

Nice to haves:

  • R7. Unified way of encoding the location of tests. [Covers: UC4]
  • R8. Codify how to provide encoded diagnostics (expected vs actual for assertions, failure message, stack traces, whatever you have). [Covers: UC5]
  • R9. Be open for extension, closed for modification. In other words, best-effort forward compatibility.

[UC1] is covered by the separation of concerns between producer and consumer. [R9] is there to ensure that the format does not prevent other use cases from arising, and other requirements from being created and handled.

Enter GTOP

GTOP, short for Generic Test Output Protocol, is a proposed protocol to deal with the above requirements. In short, each GTOP message would be a JSON object, with each message appearing on its own line. Since JSON is supported in a multitude of languages, this would make it extremely easy to generate the messages using a simple interface. Using http://json-schema.org/, we can ensure that there's a proper specification for the messages, which developers of GTOP producers can use to validate their output against, and developers of GTOP consumers can use to validate incoming messages. Other than the protocol itself, guidelines should also be created (and maintained!), helping the developer community keep results consistent across the board.

Message format

Let's start with the basics. What should a GTOP message look like? In order to somewhat help parsers, I'll propose that its first level should quite simply show the type of message. Now, keeping in mind that GTOP is not supposed to be a format for saving reports, but rather for reporting on running tests, let's go wild and say that each message should only contain a single entry.

So, let's start with the simplest message I can come up with — 1 single test, no grouping whatsoever:

[embedded example]

Note that the whitespace I've included here means that this is not actually a valid GTOP message. A GTOP message should not include line breaks, since the GTOP stream itself is already supposed to be line-delimited!

Naturally, that's not going to cover a whole lot of cases. So let's add some more:

[embedded example]

Again, this is not a valid GTOP stream, due to the inclusion of line breaks within the GTOP messages.
It's just formatted for human readability.

So, using just 2 message types — testStarted and testResult — we've created a tiny little tree with one branch and one leaf node. We've indicated the relations between the branch and its leaves, and we've indicated their status after running, together with how long they ran. That's a "yay!" as far as I'm concerned.

Now, GTOP is a pretty stupid, hard-to-remember and serious name. So let's call it something else. Let's call it Kerchief. For more information about the Kerchief protocol, I'll refer you to its official minisite: https://kerchief.io

Creating adoption

One additional problem with TAP that we haven't really talked about is its failure to gain much traction. Obviously it's popular with the Perl folks, as it's the default format for much of their tooling (Test.pm, Test::Simple, Test::More, etc.). Although there are quite a few modules in different languages for both producing and consuming TAP, still many developers have never heard of TAP, nor do many (consciously) use it. Obviously, introducing a new output format for tests is not an easy task.

Then there's documentation and specification. In order to properly collect documentation and maintain a properly versioned specification, I've created a GitHub organisation. The documentation and specification will, for now, appear on this little website. Feel free to request access if you feel like you have anything to contribute, and we'll talk it over.

I also plan on putting together a couple of supporting projects:

Perhaps I'll also make some conversion filters — TAP to Kerchief and Kerchief to TAP. Finally, a command-line Kerchief reporter could be cool.

Unit-test output formats — a state of affairs was originally published in Ilias Van Peer on Medium, where people are continuing the conversation by highlighting and responding to this story.

Posted by wintvelt on Thu, 12/29/2016 - 19:25

Towers of Hanoi with HTML5. Continue reading on Elm shorts »

Posted by Brian Hicks on Thu, 12/29/2016 - 16:00

Adding New Fields to Your JSON Decoder
Adding and changing fields in your JSON API is just a part of life.
We’ve got to have ways to deal with that!

In Elm, it’s easy to add new fields with optional from Json.Decode.Pipeline.
Let’s do it!
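A sketch of what that can look like with NoRedInk's elm-decode-pipeline for Elm 0.18 (the field names and fallback value here are my own, not from the post):

```elm
import Json.Decode as Decode exposing (Decoder)
import Json.Decode.Pipeline exposing (decode, optional, required)


type alias User =
    { name : String
    , nickname : String
    }


userDecoder : Decoder User
userDecoder =
    decode User
        |> required "name" Decode.string
        -- A missing or null "nickname" falls back to the default value.
        |> optional "nickname" Decode.string "anonymous"
```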

Continue Reading

Posted by Brian Hicks on Mon, 12/26/2016 - 17:00

Banish Type Tedium with JSON to Elm
When you’re writing JSON decoders, it’s helpful to understand what’s going on.
When you’re up in the clouds with your JSON workflow doing all sorts of fancy and advanced stuff, it’s great!

But what about when you don’t need all the fancy stuff?
(Or you’re just getting started?)
Meh.

It’s a hassle to write decoders, objects, and encoders for every single field by hand.
It feels like tedious boilerplate.
Pass.

But really, you don’t have to do it all by hand.
Please meet JSON to Elm.

Continue Reading