My experience at JSConf Belgium 2017.

This Thursday, 29th June, the conference was held in the beautiful city of Brugge, and I had the chance to go! I had already taken a look at the speakers and was interested in at least four talks, so it was more than worth going.

The talks started in the afternoon and were structured in two tracks that didn't follow any topic in particular, so I had to sacrifice some.


Opening keynote.

There was a big delay on the trains, so I couldn't watch this piece in its entirety; the speaker, Peter-Paul Koch, was already halfway through his talk when I arrived. The rest of the keynote was extremely interesting.

He talked about the well-known obesity of the web, pinpointing how the indiscriminate use of frameworks, build tools, and libraries is polluting the web development environment, generating heavier and heavier websites that take several seconds to download and render even on high-end devices. This makes these sites almost impossible to access for a big part of the world's population, who have low-end devices, unstable and sloppy network connections, and none of the high-throughput environments we are used to in first-world countries.

All this bloat partly originates from web developers trying to emulate the full functionality of native desktop applications using web technologies.

He also brought to the table the uncertainty frontend developers have to deal with when developing software in such an aggressive and hostile environment as the web. Maybe the so-called JavaScript fatigue partly originates from us frontend developers trying to be taken seriously by over-creating tooling and patterns that exponentially increase the complexity of our software.

Every idea was delivered with a thoughtful and encouraging attitude. Far from being a rant, the talk was inspiring and full of hope for the web. I enjoyed it very much, and I am already following the speaker to see what he gets up to next.

Reactive Programming By Example.

A high-level introduction to Reactive Programming by a couple of speakers: Lander Verhack and his boss. Using a funny and engaging question-and-answer format, they gave a very simple overview of what reactive programming is by building a mock page with a simple search engine that queried the Spotify API and showed a list of songs, artists, and albums.

At work I primarily use Ember.js, so the concepts explained in this talk closely reminded me of Ember's observers and computed properties, which are built on top of observers.

The Era of Module Bundlers.

Arun DSouza provided a walkthrough of task runners, build tools, and module bundlers, enumerating the most commonly used ones today and historically: Grunt, Gulp, Browserify, and webpack, among others.

At the end he focused on webpack as the most complete tool, incorporating task running, bundling, minification, and several other duties in a single environment. Despite being interesting, this talk did not give me much new beyond some specifics of webpack for this or that task.

How AI saved my life.

I really, really enjoyed this talk. Nick Trogh, evangelist at Microsoft BeLux, gave an introduction to the very cool Microsoft Cognitive Services APIs on Azure and, using a demo website, gave examples of how users can interact with them.

The first example used the Text Analytics API to extract the sentiment of a text, the subjective opinion of its author, as a score that is negative the closer it is to zero and positive the closer it is to one.

Then the Computer Vision API came along. Given an initial small set of example images from a football team, the API detected whether a new person belonged to that team, as well as their age, emotion (via the Emotion API), gender, and a range of other attributes, all with very good precision. It was amazing; there is a lot of potential in opening this kind of service up to third parties.

Enterprise Javascript.

I think one of my tweets pretty much summarizes the general idea: Oracle bundled jQuery + Knockout.js + RequireJS + Cordova = boom, OracleJET. We will see how it gets used in the future!

Yet another frontend JavaScript framework (Peter-Paul Koch warned us!) to add to our collection. Oracle jumped on the frontend train with an open-source hotchpotch of existing technologies, hoping to build a community around it.

It has several interesting points: it wants to adopt Web Components, it is responsive by default, and it provides internationalization out of the box... Only time will tell where this new tool lands.

How I hacked my coffee machine with JS.

The time the speaker spent talking was soaked in pure awesomeness. Dominik Kundel explained the empowering process of being bored one day and deciding to crack open his flatmate's coffee machine to reverse engineer the microcontroller and learn how the buttons worked and communicated internally.

Once that was done, he could hook a Tessel microcontroller to the machine and develop a small service that allowed him to start brewing coffee remotely.

Really cool home hacking project!


The overall level of the talks was introductory, but I enjoyed almost all of them very much. The ones explaining the Microsoft Cognitive Services API and the coffee machine hacking were at the top, and the conference also sparked my interest in Reactive Programming; I will have to look into that too.

Overall, it was fun and enriching to attend, and I am bringing home several ideas to test. As a fun fact, looking around I didn't find a single Linux user in the whole conference; I think I just saw one guy with Emacs open! :P

Types of dependencies in NPM.


When we start a new JavaScript project and expect it to reach even a minimal level of complexity, we will eventually need to use external libraries if we want to avoid writing every feature from scratch. For that matter, you can choose one of three approaches:

  • Download the library's code bundle and adapt it to your development and deployment process. This is easy if the library is very small, you don't care about updating its version, and you know very specifically what you want from it.
  • Use a Content Delivery Network to access a remote copy of the library hosted in the cloud.
  • Use a tool like npm, Bower, or the new kid in town, Yarn. These are package managers: pieces of software that allow the developer to manage the project's dependencies by fetching them from a central repository, specify each library's desired version, and execute scripts for different steps of the development process.

When you are working on a project and decide to use npm as its package manager, npm will use a file named package.json in the root of the project's directory. This file contains information about the project's dependencies on external libraries, along with other useful information, and sits as the point of reference npm uses when fetching and updating your dependencies. More information about the package.json file [here](link to package.json explanation).

How does it work.

Very briefly explained, npm reads the contents of your package.json file and enumerates the libraries your current project depends on. It then fetches each library recursively: each of those libraries may itself contain a package.json file enumerating the dependencies it relies upon, so npm fetches those too, and so on.

Dependencies are expressed in a hash with one pair of

  "package_name_1": "version range",
  "package_name_2": "version range",

per package. The version range can be a string indicating the version range to fetch from the central repository, a tarball URL, or a git URL pointing to a specific tag or commit.
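To illustrate the three shapes a version specifier can take, here is a sketch of a dependencies hash (the package names and URLs are hypothetical, just to show the syntax):

```json
"dependencies": {
  "some-registry-package": "^1.2.0",
  "some-tarball-package": "https://example.com/some-tarball-package-1.0.0.tgz",
  "some-git-package": "git+https://github.com/someuser/some-git-package.git#v2.0.0"
}
```

The first entry is a semver range resolved against the npm registry, the second points straight at a tarball, and the third pins a git repository to a specific tag.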

There are different keys inside the package.json file to indicate the nature of the dependencies: dependencies, devDependencies, and peerDependencies.
You can also find bundledDependencies and optionalDependencies, but I do not see them used that often.


dependencies.

These are the libraries your project directly depends on in order to execute properly. The correct functioning of your application depends directly on fetching these packages. After compiling and minifying your JavaScript, the final bundle must include these libraries' code to work.
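For instance, a project using lodash as a runtime utility library would declare it here (the version number is only illustrative):

```json
"dependencies": {
  "lodash": "^4.17.4"
}
```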


devDependencies.

They represent the packages needed during the development of your project, such as tools and frameworks, that are not needed in the final product. An example for a project developed with the Ember JavaScript framework could be:

"devDependencies": {
  "ember-cli": "2.11.1",
  "ember-cli-sass": "^5.3.1"
}

  • ember-cli is the command line tool that helps create blueprints for the project, run scripts, etc., much like the Rails CLI tool.
  • ember-cli-sass contains files to help compile the Sass code into CSS code.

As you can see, the final user of your product is not interested in having these auxiliary tools in the final bundle.


peerDependencies.

peerDependencies are used to avoid name clashes and application crashes when a package depends on a library version that is not the same as the one specified in the project's package.json. For example:

{
  "name": "test_peer",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "",
  "license": "ISC",
  "dependencies": {
    "grunt-webpack": "^3.0.0",
    "webpack": "^1.0.0"
  }
}

I specify version 1 of webpack and also install grunt-webpack. grunt-webpack states a different webpack version range in its peerDependencies hash:

"peerDependencies": {
  "webpack": "^2.1.0-beta || ^2.2.0-rc || 2 || 3"
}

The result of running npm install on this project is:

test_peer@1.0.0 /home/esteban/gits/test/test_peer
├─┬ grunt-webpack@3.0.0
│ ├─┬ deep-for-each@1.0.6
│ │ └─┬ is-plain-object@2.0.3
│ │   └── isobject@3.0.0
│ └── lodash@4.17.4
└─┬ UNMET PEER DEPENDENCY webpack@1.0.0

... etc ...

This means that while my project downloaded version 1 of webpack, grunt-webpack depends on another set of versions to execute safely, so the package manager intercedes and warns us that this setup may break the program.


bundledDependencies.

They can also be written as bundleDependencies. They enumerate the dependencies that must be included in your project, just like regular dependencies, and they will also be included in the tarball generated as the result of executing npm pack.

They are used, for example, when you want to include third-party libraries that are not part of the npm registry, or to distribute your project as a module.
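Note that, unlike the other fields, bundledDependencies is an array of package names rather than a hash of version ranges; the versions come from the regular dependencies entries. A sketch with a hypothetical package name:

```json
"dependencies": {
  "my-internal-lib": "1.0.0"
},
"bundledDependencies": ["my-internal-lib"]
```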


optionalDependencies.

Just like regular dependencies, but the install doesn't fail if these dependencies cannot be fetched or built.
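Because an optional dependency may end up missing at runtime, code that uses one typically guards the require call and falls back gracefully. A minimal sketch in Node.js (the package name is hypothetical):

```javascript
let speedups;
try {
  // "native-speedups" is a hypothetical optional dependency that
  // may have failed to install on some platforms.
  speedups = require('native-speedups');
} catch (err) {
  // Fall back to a pure-JavaScript code path instead of crashing.
  speedups = null;
}

const hasSpeedups = speedups !== null;
console.log(hasSpeedups ? 'using native speedups' : 'using JS fallback');
```

This way the application keeps working whether or not the optional package could be installed.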

That's it, I hope it made things a little clearer for you!