Posted by dennisreimann on Tue, 12/20/2016 - 02:00

The beauty of The Elm Architecture lies in its simplicity:
it structures applications into four parts and defines how these interact with each other.
In Elm there is only this one well-defined way to handle interactions and manage state –
and it provides a good foundation for modularity, code reuse, and testing by default.
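
To make those four parts concrete, here is a minimal counter sketch using the conventional Model/Msg/update/view structure (an illustrative example, not code from the article):

import Html exposing (Html, button, div, text)
import Html.Events exposing (onClick)

-- Model: the entire state of the application
type alias Model =
    Int

-- Msg: everything that can happen
type Msg
    = Increment
    | Decrement

-- update: the only place state changes, in response to messages
update : Msg -> Model -> Model
update msg model =
    case msg of
        Increment ->
            model + 1

        Decrement ->
            model - 1

-- view: renders the state and produces messages on interaction
view : Model -> Html Msg
view model =
    div []
        [ button [ onClick Decrement ] [ text "-" ]
        , text (toString model)
        , button [ onClick Increment ] [ text "+" ]
        ]

main =
    Html.beginnerProgram { model = 0, view = view, update = update }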

Posted by Brian Hicks on Mon, 12/19/2016 - 17:00

We’re in the middle of a quest to build a set implementation from scratch.
So far, we’ve implemented our constructors, rotation, balancing, size, and member.
Last week we stopped off to review how folds work.
This week, we’re going to create folds for our set!

Posted by wintvelt on Wed, 12/14/2016 - 15:12

Comparing different approaches to cleaning your Elm code

Posted by Brian Hicks on Mon, 12/12/2016 - 17:00

Welcome back!
We’re in the middle of a series about implementing functional data structures in Elm.
In part one we implemented the skeleton of our sets using a binary search tree.
Last week, part two, we added membership tests and size to our set.
This week we’re going to make a quick pit stop to talk about how folds work, and next week we’ll implement them for our set.
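
As a quick refresher in the meantime: a fold collapses a structure into a single value by combining each element with an accumulator. With Elm's built-in lists (an illustration, not the series code):

-- List.foldl threads an accumulator through the list from the left:
-- 3 + (2 + (1 + 0)) == 6
total : Int
total =
    List.foldl (+) 0 [ 1, 2, 3 ]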

Posted by wintvelt on Mon, 12/05/2016 - 20:49

The pure version and how it compares to stateful

Posted by wintvelt on Mon, 12/05/2016 - 20:49

Experimenting with components

Posted by Brian Hicks on Mon, 12/05/2016 - 17:00

Now that our trees balance themselves we can keep replicating the built-in set API.
This week, we’ll answer two questions:

  1. Is an item in the set?
  2. How many items are in the set?

And we’re going to do it recursively!
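
For a flavor of what those recursive answers can look like on a binary search tree, here is a hedged sketch (the constructor names are illustrative, not necessarily the ones used in the series):

type Set comparable
    = Empty
    | Tree comparable (Set comparable) (Set comparable)

-- Is an item in the set? Use the ordering to walk down one branch.
member : comparable -> Set comparable -> Bool
member item set =
    case set of
        Empty ->
            False

        Tree head left right ->
            if item < head then
                member item left
            else if item > head then
                member item right
            else
                True

-- How many items are in the set? One for this node plus both subtrees.
size : Set comparable -> Int
size set =
    case set of
        Empty ->
            0

        Tree _ left right ->
            1 + size left + size right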

Posted by wintvelt on Tue, 11/29/2016 - 11:14

Using extractions for friendlier code

Posted by Brian Hicks on Mon, 11/28/2016 - 04:25

In this series, we’ve already implemented sets, convenient ways to create them and rotation.
Next, we’ll make our trees use those rotation functions!

Posted by Trouble (::)entrating on Fri, 11/25/2016 - 23:32

Take a look at this shortened version of the package.json file for knex,
a SQL query builder and driver for popular SQL databases in JavaScript.

Before proceeding I want to take a second and let you know that knex is one
of my favorite libraries. I absolutely recommend it and want to make sure it’s
clear that none of this post should be taken as criticism of knex or its
author, Tim Griesser. I chose it (a) because it happens to be a good example of something that
could benefit from dealing with the larger issue in question and (b) so that
people who read this will know that it exists and check it out.

{
  "name": "knex",
  "version": "0.12.6",
  "description": "A batteries-included SQL query & schema builder for Postgres, MySQL and SQLite3 and the Browser",
  "main": "knex.js",
  "dependencies": {...},
  "devDependencies": {...},
  "buildDependencies": [...],
  "scripts": {...},
  "bin": {
    "knex": "./bin/cli.js"
  },
  "repository": {
    "type": "git",
    "url": "git://github.com/tgriesser/knex.git"
  },
  "keywords": [...],
  "author": {
    "name": "Tim Griesser",
    "web": "https://github.com/tgriesser"
  },
  "browser": {
    "./lib/migrate/index.js": "./lib/util/noop.js",
    "./lib/bin/cli.js": "./lib/util/noop.js",
    "./lib/seed/index.js": "./lib/util/noop.js",
    "mssql": false,
    "mysql": false,
    "mysql2": false,
    "mariasql": false,
    "pg": false,
    "pg-query-stream": false,
    "oracle": false,
    "strong-oracle": false,
    "sqlite3": false,
    "oracledb": false
  },
  "files": [...],
  "license": "MIT"
}

After reading through this file, what can we tell about where this library
is meant to be used? We installed it from npm, and the description and
dependencies (redacted) mention things like SQLite, so we can assume it's meant
for node. From there we can guess that it works in all versions of node, because
there is no engines field in the package.json describing which versions. We
can also learn that knex runs in the browser, because it is mentioned in the
description and we have the browser field, which, while standardized on its
own, is a separate effort from those governing the rest of the package.json
format. But we still don't know which browsers this library will work in. If we
read the documentation for knex, we learn that we can use it with browsers
that support Web SQL, which is no longer an active specification, is not fully
supported, and may disappear from future browser versions.

Anyway, between the package.json, the library’s documentation, and documentation
for different platforms mentioned in the docs we can figure out where we can
reasonably expect to see our lives improved by using knex. So what’s the
problem?

Tools like npm and webpack aren’t going to go read the knex docs, nor are they
going to read caniuse.com and MDN and figure out if knex will even work in the
environments we might be targeting.

So why don’t we just write all this stuff down somewhere for our tools to read?
We do this with semantic versioning and it totally works! We agree that we
should stop writing versions for people and start writing them for machines so
that our tools can let us know before we even attempt to use them whether our
libraries are compatible with each other. Why not do the same with
where we want them to run?
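
For reference, this is what "writing them for machines" already looks like on the version axis: a dependencies block with semver ranges that a constraint solver can check mechanically (the ranges below are illustrative):

{
  "dependencies": {
    "knex": "^0.12.0",
    "pg": ">=4.0.0 <7.0.0"
  }
}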

This idea found its way into my head almost a year ago when I began using
Elm. I was interested in expanding the Elm platform to
Electron so that folks could get the benefits of Elm in their desktop apps.
This seemed to fit into the larger narrative at the time of how Elm was going
to support the web platform in general, and I asked a question. Elm’s creator,
Evan Czaplicki, gave a really interesting answer that I’ve been thinking about
ever since.

Luke: Will the idea of a set of platform bindings for “the web platform”
extend to similarly embedded bindings for other environments where Elm might
thrive as a compile-to-js language? The specific example I mean to reference is
Electron, in my opinion an incredible use-case for Elm, which ships with
embedded APIs that augment the chromium wrapper and are not implementable in
Elm. I’ve begun working on these bindings at https://github.com/elm-electron and
I’m not sure from the above what the future status of this project might be
since it doesn’t quite fit into one side or the other (web platform vs.
rewritable in Elm).

Evan: Luke, I agree that that is an important thing to
support. Other examples include elm-webgl and node. Basically other “platforms”
in my mind. This gets us into a world where a certain package may only work on
certain “platforms”, so it is not enough to solve constraints, we need to label
each package with the platforms it needs. So I’d like to get one platform sorted
out and see what that looks like before building in language support for many
platforms. I think those are two separate and difficult problems, and if I try
both at once, I’m probably going to have a bad time. If there is more to say,
let’s discuss it in a separate thread.

Source

This thread is now very outdated and much of the greater context
has since changed. But the idea has stuck with me, and I do think there
is more to say, though not just with regard to Elm. To me this is best exemplified
by a different feature of Elm: the enforcement of constraints, from types
in code to package versions, prior to execution.

That is to say: if a thing isn’t going to work, and my tools can find out about
that before I try and do said thing, I want them to tell me right then. We get
this with Elm’s type system, and we get it with Elm’s package manager when it
comes to package versions. We can’t run broken code because the compiler tells
us, not the runtime. And we can’t install packages in a broken way because the
package manager tells us, not the runtime. In my opinion, this is one of the
most important design features of the Elm platform and an enormous source of
productivity. Just let the compiler guide you to correct code!

So why can I install elm-lang/mouse, use it in an Elm program, compile that
program, run it in node, and see lots of errors at runtime? Well, because Elm
isn’t meant for node. It just happens to compile to JS and I can run JS in node.
But more to the point, there’s nothing built into either node or elm-package to
say “this code isn’t meant to work here”.

And now back to knex. Why can I install knex, add it to a webpack bundle to do
Web SQL stuff, run it in Firefox, and watch it fail at runtime? It’s not knex’s
fault that Firefox doesn’t support Web SQL, nor is it necessarily the
responsibility of the maintainer to clarify and document it. The knex library
doesn’t depend on Web SQL, but you can use it that way.

I think the problem is that we don’t offer enough information to our tools to
let them catch our mistakes. We can’t ask the maintainer of knex to add code to
detect when it’s being used in a browser that doesn’t support Web SQL. And we
also can’t ask the maintainers of webpack to develop NLP features so that
webpack can read our libraries’ docs and determine if they will run. What we
need is to meet in the middle: declaring compatibility should be no more
difficult for a library author than just that, a declaration. We need
additional dimensions beyond package version, in the spirit of semantic
versioning, that make it clear where something is meant to be used.

Before proposing anything I’ll set the constraint that I’m really only thinking
about solutions for JavaScript and things that use JavaScript as a VM. My reason
is that these are the things I work on, and I don’t know enough about other
stuff to know if this is even a problem or how it should be solved there. Let’s
just think about JS.

Step one could be a standard way of expressing a fully qualified environment.
This includes dimensions like browser vs. server. It includes things like which
browser vendor and browser version. It includes things like which server runtime
and which runtime version. It includes a way to register all of these things so
that if one day there ended up being a server runtime with the same name as a
browser there would be no conflicts. Lastly, it also includes augmented and
combined derivations of things like browser and server, so that things like
Electron can declare compatibility with the embedded node or Chromium version
as well as their own APIs.
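
As a purely hypothetical sketch, such a declaration might look something like this in a package.json; the environments field and its syntax are invented here for illustration and are not part of any standard:

{
  "name": "knex",
  "version": "0.12.6",
  "environments": [
    "node >=0.10",
    "electron >=1.0",
    "browser && websql"
  ]
}

A constraint solver could then refuse to install this package into a build that targets, say, a browser without Web SQL support.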

Step two could be to determine where this information needs to be declared.
Does it go in the package.json or elm-package.json? Does it go in the webpack
config? Does it get embedded in transpilation results or directly in source code
like a "use" pragma? Wherever these declarations go, they need to be in a place
where our package managers can use them in their constraint solvers and our
preprocessors can use them in their error reporting. Maybe putting them
somewhere accessible by VMs could be asking too much, but I don't work on
browsers or standards committees at the moment, so I don't really know.

Semantic versioning does a great job of helping our tools understand the what
of our library combinations. Let’s not stop there. Let’s figure out a good way
of adding more dimensions than just version, to deal with the where.
Share your thoughts with me!

Posted by Brian Hicks on Mon, 11/21/2016 - 17:00

Last time we started making a set using a binary search tree.
Let’s continue by adding more functionality to our set!
We’re going to improve it by making sure our set stays in proper order.

Posted by Rundis on Mon, 11/21/2016 - 01:00

Another Elm release and it's time for yet another upgrade post.
The changes outlined in the migration guide didn't look
too intimidating, so I jumped into it with pretty high confidence. It took me about 2 hours to get through, and it was almost an instant success.
The compiler had my back all along, helped by my editor showing errors inline and docs/signatures whenever I was in doubt.
I didn't even have to resort to Google once to figure out what to do. I said it almost worked the first time. Well,
I had managed to add an HTTP header twice, which Servant wasn't too impressed by, but once that was fixed everything was working hunky-dory!

Useful resources

  • Check out the other episodes in this blog series.
  • The accompanying Albums sample app is on GitHub, and there is a tag
    for each episode.

Introduction

The Albums app is about 1400 lines of Elm code, so it's small, but it might still give you some idea of the effort
involved in upgrading. With this upgrade I tried to be semi-structured in my commits, so I'll be referring to them as we go along.

Upgrade steps

Running elm-upgrade

For this release @avh4 and @eeue56 created the very
handy elm-upgrade util to ease the upgrade process.

To summarize what elm-upgrade does: it upgrades your project definition (elm-package.json) and runs elm-format on your
code in "upgrade mode" so that most of the syntax changes in core are fixed.

It worked great! The only snag I had was that it failed to upgrade elm-community/json-extra, but hey, that was simple enough for me to do afterwards.

Here you can see the resulting diff.

Service API - Http and Json changes

Changing a simple get request

0.18:

getArtist :
    Int
    -> (Result Http.Error Artist -> msg)
    -> Cmd msg -- (1)
getArtist id msg =
    Http.get
        (baseUrl ++ "/artists/" ++ toString id)
        artistDecoder -- (2)
        |> Http.send msg -- (3)

  1. We no longer have a separate msg for errors; our msg type constructor
     should now take a Result.
  2. The order of url and decoder has swapped.
  3. To send the request created in (2), we use the send function.

0.17:

getArtist :
    Int
    -> (Http.Error -> msg)
    -> (Artist -> msg)
    -> Cmd msg
getArtist id errorMsg msg =
    Http.get artistDecoder (baseUrl ++ "/artists/" ++ toString id)
        |> Task.perform errorMsg msg

If you wish to keep the old behavior, you can convert a request to
a task using toTask.
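
For example, a small sketch of that route, reusing baseUrl, artistDecoder, and Artist from above (in 0.18, Task.attempt turns the task back into a command with a Result-handling message):

getArtistTask : Int -> Task.Task Http.Error Artist
getArtistTask id =
    Http.get (baseUrl ++ "/artists/" ++ toString id) artistDecoder
        |> Http.toTask

-- compose with other tasks as needed, then turn it into a Cmd
getArtistCmd : Int -> (Result Http.Error Artist -> msg) -> Cmd msg
getArtistCmd id msg =
    Task.attempt msg (getArtistTask id)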

Changing a post request

0.18:

createArtist :
    ArtistRequest a
    -> (Result Http.Error Artist -> msg)
    -> Cmd msg
createArtist artist msg =
    Http.post
        (baseUrl ++ "/artists")
        (Http.stringBody -- (1)
            "application/json"
            <| encodeArtist artist
        )
        artistDecoder
        |> Http.send msg

  1. With 0.18 we can specify the content-type for the body, and now we can
     actually use the post function! Yay :-)

0.17:

createArtist :
    ArtistRequest a
    -> (Http.Error -> msg)
    -> (Artist -> msg)
    -> Cmd msg
createArtist artist errorMsg msg =
    Http.send Http.defaultSettings
        { verb = "POST"
        , url = baseUrl ++ "/artists"
        , body = Http.string (encodeArtist artist)
        , headers =
            [ ( "Content-Type"
              , "application/json"
              )
            ]
        }
        |> Http.fromJson artistDecoder
        |> Task.perform errorMsg msg

Changing a put request

0.18:

updateArtist :
    Artist
    -> (Result Http.Error Artist -> msg)
    -> Cmd msg
updateArtist artist msg =
    Http.request -- (1)
        { method = "PUT"
        , headers = [] -- (2)
        , url =
            baseUrl
                ++ "/artists/"
                ++ toString artist.id
        , body =
            Http.stringBody "application/json"
                <| encodeArtist artist
        , expect =
            Http.expectJson artistDecoder -- (3)
        , timeout = Nothing
        , withCredentials = False
        }
        |> Http.send msg

  1. Rather than just passing a record, we use the request function
     to gain full control over how the request is created.
  2. We don't need to specify the content-type header here, because we specify
     that when creating the body.
  3. We configure the request to expect a JSON response, providing it with our
     JSON decoder.

0.17:

updateArtist :
    Artist
    -> (Http.Error -> msg)
    -> (Artist -> msg)
    -> Cmd msg
updateArtist artist errorMsg msg =
    Http.send Http.defaultSettings
        { verb = "PUT"
        , headers =
            [ ( "Content-Type"
              , "application/json"
              )
            ]
        , url =
            baseUrl
                ++ "/artists/"
                ++ toString artist.id
        , body = Http.string (encodeArtist artist)
        }
        |> Http.fromJson artistDecoder
        |> Task.perform errorMsg msg

Changing Json Decoding

0.18:

albumDecoder : JsonD.Decoder Album
albumDecoder =
    JsonD.map4 Album -- (1)
        (JsonD.field "albumId" JsonD.int) -- (2)
        (JsonD.field "albumName" JsonD.string)
        (JsonD.field "albumArtistId" JsonD.int)
        (JsonD.field "albumTracks"
            <| JsonD.list trackDecoder
        )

  1. You can use the map<n> functions to map several fields.
  2. The infix := syntax has been removed in favor of the explicit field function.

0.17:

albumDecoder : JsonD.Decoder Album
albumDecoder =
    JsonD.object4 Album
        ("albumId" := JsonD.int)
        ("albumName" := JsonD.string)
        ("albumArtistId" := JsonD.int)
        ("albumTracks" := JsonD.list trackDecoder)

You can view the complete diff for the Service API here.
(Please note that the headers for the put request should not be there; this is fixed in another commit.)

Handling the Service API changes

We'll use the artistlisting page as an example of handling the API changes.
The big change is really that the messages have changed signature, and we can remove a few.

Msg type changes

0.18:

type Msg
    = Show
    | HandleArtistsRetrieved (Result Http.Error (List Artist)) -- (1)
    | DeleteArtist Int
    | HandleArtistDeleted (Result Http.Error String)

  1. We handle the success case and the failure case with the same message,
     using the Result type.

0.17:

type Msg
    = Show
    | HandleArtistsRetrieved (List Artist)
    | FetchArtistsFailed Http.Error
    | DeleteArtist Int
    | HandleArtistDeleted
    | DeleteFailed

Changes to the update function

0.18:

update : Msg -> Model -> ( Model, Cmd Msg )
update action model =
    case action of
        Show ->
            ( model, mountCmd )

        HandleArtistsRetrieved res ->
            case res of
                Result.Ok artists -> -- (1)
                    ( { model | artists = artists }
                    , Cmd.none
                    )

                Result.Err err -> -- (2)
                    let
                        _ =
                            Debug.log "Error retrieving artist" err
                    in
                        ( model, Cmd.none )

        DeleteArtist id ->
            ( model
            , deleteArtist id HandleArtistDeleted
            )

        HandleArtistDeleted res ->
            case res of
                Result.Ok _ ->
                    update Show model

                Result.Err err ->
                    let
                        _ =
                            Debug.log "Error deleting artist" err
                    in
                        ( model, Cmd.none )

  1. Handling the success case is similar to how we did it in 0.17.
  2. Poor man's error handling… don't do this for realz!

0.17:

update : Msg -> Model -> ( Model, Cmd Msg )
update action model =
    case action of
        Show ->
            ( model, mountCmd )

        HandleArtistsRetrieved artists ->
            ( { model | artists = artists }
            , Cmd.none
            )

        FetchArtistsFailed err ->
            ( model, Cmd.none )

        DeleteArtist id ->
            ( model
            , deleteArtist id DeleteFailed HandleArtistDeleted
            )

        HandleArtistDeleted ->
            update Show model

        DeleteFailed ->
            ( model, Cmd.none )

The diffs for the various pages can be found here:

Handling changes to url-parser

The url-parser package has had a few changes, so let's have a closer look.

0.18:

routeParser : Parser (Route -> a) a
routeParser =
    UrlParser.oneOf
        [ UrlParser.map Home (s "") -- (1)
        , UrlParser.map NewArtistPage
            (s "artists" </> s "new")
        , UrlParser.map NewArtistAlbumPage
            (s "artists" </> int </> s "albums" </> s "new")
        , UrlParser.map ArtistDetailPage
            (s "artists" </> int)
        , UrlParser.map ArtistListingPage
            (s "artists")
        , UrlParser.map AlbumDetailPage
            (s "albums" </> int)
        ]


decode : Location -> Maybe Route -- (2)
decode location =
    UrlParser.parsePath routeParser location -- (3)

  1. Consistency matters: format is now map!
  2. Rather than returning a Result, we now return a Maybe.
  3. You can use parsePath and/or parseHash to parse the url; parsePath is
     what we need for our case.

0.17:

routeParser : Parser (Route -> a) a
routeParser =
    oneOf
        [ format Home (s "")
        , format NewArtistPage
            (s "artists" </> s "new")
        , format NewArtistAlbumPage
            (s "artists" </> int </> s "albums" </> s "new")
        , format ArtistDetailPage
            (s "artists" </> int)
        , format ArtistListingPage (s "artists")
        , format AlbumDetailPage (s "albums" </> int)
        ]


decode : Location -> Result String Route
decode location =
    parse identity routeParser (String.dropLeft 1 location.pathname)

Handling changes to Navigation in Main

Changing the main function

0.18:

main : Program Never Model Msg -- (1)
main =
    Navigation.program UrlChange -- (2)
        { init = init
        , view = view
        , update = update -- (3)
        , subscriptions = \_ -> Sub.none
        }

  1. The function signature for main has become more specific
     (probably triggered by the introduction of the debugger).
  2. We now supply a message constructor for url changes. This message
     is passed into our update function like any other message. Nice!
  3. The urlUpdate field is gone; all updates flow through our provided update function.

0.17:

main : Program Never
main =
    Navigation.program
        (Navigation.makeParser Routes.decode)
        { init = init
        , view = view
        , update = update
        , urlUpdate = urlUpdate
        , subscriptions = \_ -> Sub.none
        }

Changing the init function

0.18:

init : Navigation.Location -> ( Model, Cmd Msg )
init loc =
    update (UrlChange loc) initialModel

We get the initial url passed as a Location to the init function, and we
simply delegate to the update function, which handles the url and loads the
appropriate page.

0.17:

init : Result String Route -> ( Model, Cmd Msg )
init result =
    urlUpdate result initialModel

Changing the main update function

0.18:

update : Msg -> Model -> ( Model, Cmd Msg )
update msg model =
    case msg of
        -- .. everything else the same really, except:
        UrlChange loc -> -- (1)
            urlUpdate loc model


urlUpdate : Navigation.Location -> Model -> ( Model, Cmd Msg ) -- (2)
urlUpdate loc model =
    case Routes.decode loc of -- (3)
        Nothing -> -- (4)
            model ! [ Navigation.modifyUrl (Routes.encode model.route) ]

        Just (ArtistListingPage as route) -> -- (5)
            { model | route = route }
                ! [ Cmd.map ArtistListingMsg ArtistListing.mountCmd ]

        -- etc for the rest of the routes

  1. We have a new case for the UrlChange Msg we provided in the main function.
     We just delegate to our existing urlUpdate function (more or less).
  2. We've changed the signature to receive a Location rather than a Result.
  3. Routes.decode returns a Maybe, so we pattern match on the result.
  4. If parsing the url was unsuccessful, we change the url to our default url
     (provided by initialModel the first time; otherwise it changes the url
     back to the previously successful one).
  5. When successful, we change the url and initialize the appropriate route (/page).

0.17:

update : Msg -> Model -> ( Model, Cmd Msg )
update msg model =
    case msg of
        -- .. etc


urlUpdate : Result String Route -> Model -> ( Model, Cmd Msg )
urlUpdate result model =
    case result of
        Err _ ->
            model ! [ Navigation.modifyUrl (Routes.encode model.route) ]

        Ok (ArtistListingPage as route) ->
            { model | route = route }
                ! [ Cmd.map ArtistListingMsg ArtistListing.mountCmd ]

        -- etc for the rest of the routes

You can see the complete diff here.

Summary

Obviously there were quite a few changes, but none of them were really that big, and to my mind all of them changed things for the better.
Using elm-upgrade and the upgrade feature in elm-format really helped kick-start the conversion; I have great hopes for this getting even better in the future.

I haven't covered the re-introduction of the debugger in elm-reactor, which was the big new feature in Elm 0.18.

In addition to Elm 0.18 being a nice incremental improvement, it has been great to see that the community
has really worked hard to upgrade packages and to help make the upgrade as smooth as possible. Great stuff!

A little mind-you: even though this simple app was easy to upgrade, that might not be the case for you. But the stories
I've heard so far have a similar ring to them. I guess the biggest hurdle for upgrading is a dependency on lots of third-party packages
that might take some time to be upgraded to 0.18. Some patience might be needed.

Posted by dennisreimann on Thu, 11/17/2016 - 13:00

Some very good and useful plugins that will enhance your Elm editing in Atom.

Posted by dennisreimann on Mon, 11/14/2016 - 12:00

A list of tools and resources I found valuable when working with Elm. It contains useful tools that will help in your day-to-day work, as well as links to learn Elm and to deepen your knowledge.