Thursday, 2 April 2015

Hack @ EF, TALKIEN.IO is born


Recently I had an opportunity to participate in a hackathon organized by Entrepreneur First. I really want to thank the guys at EF for the opportunity and a great weekend! For those who don't know, EF is an investor that invests in talent, in the entrepreneurs themselves. I really recommend that everybody who is into the startup world apply for their next cohort.

It was also a great weekend because I met Miika, Pyry and Pavel. We formed a team to create something cool in 24 hours. It was huge fun and an honour to work with these guys!


Pavel, me, Miika, Pyry @ EF


We created a browser-based application that listens while you speak and instantly shows relevant pictures and information.


As we really enjoyed working with each other, we continued the work and published it, so here it is: TALKIEN.IO


We are still searching for a good use case, so any feedback is appreciated! :)

Sunday, 18 January 2015

Creating a graph of software technologies with Docker, Neo4j, NodeJS and D3.js Part 2

The plan

In the previous part we created a running Neo4j instance inside a Docker container.
Now we can create a middleware which calls the database's REST API at the bottom of its layer and exposes an API to the client at the top. As we did with the database instance, we will put the Node instance(s) into containers as well.

Ok, so let's create a Docker image for Node first.

The Node image

The good news is that there is an official Node image on Docker Hub. (Be aware that this does not mean we are super secure; it just means the image should work properly.) Let's check its Dockerfile.
What we can see is that NPM is installed, so we can manage Node packages easily. Also, if we run this image, the Node interpreter will be executed and will wait for input.
We are able to create our own image with a running Node + the packages we need.

We could put lots of "RUN npm install [package]" commands inside a Dockerfile, but instead of doing that we will create a package.json file and run "npm install" on it, keeping the dependency list as an external resource. When npm install runs, a node_modules directory is created. The packages could live in a separate Docker image that we rebuild whenever package.json changes, but I don't think putting the Node instance and the packages together in the same image hurts its robustness much.

Ok let's do it.
Create the package.json file first.
ogi@ubuntu:~/techgraph$ mkdir webapp
ogi@ubuntu:~/techgraph$ cd webapp/
ogi@ubuntu:~/techgraph/webapp$ touch package.json
I am using Express.js as middleware to help with routing and with handling HTTP requests and responses. There are loads of other middleware frameworks to try out, but Express suits us perfectly for now, so we will put Express into our package.json as a dependency.
I am also using the request Node module to make POST requests to our Neo4j instance.

{
  "name": "techgraph",
  "description": "A graph of technology connections",
  "version": "0.0.1",
  "author": {
    "name": "Ognjen Bubalo",
    "email": "ognjen.bubalo@gmail.com"
  },
  "dependencies": {
    "express": "^4.10.4",
    "request": "^2.51.0"
  }
}

Ok, now an app.js file.
ogi@ubuntu:~/techgraph/webapp$ touch app.js
I'll leave it empty for now.

Now next to the package.json let's create the Dockerfile.
FROM node:0.10-onbuild
ADD package.json /
ADD app.js /
RUN npm install
EXPOSE 3000
So, while building an image from this Dockerfile, package.json and app.js will be added and npm install will be called. It will create a node_modules directory and download the needed packages into it.

So, let's build it!
ogi@ubuntu:~/techgraph/webapp$ sudo docker build -t webapp .
[sudo] password for ogi:
[info] POST /v1.15/build?rm=1&t=webapp
[60fa92d5] +job build()
Sending build context to Docker daemon 5.632 kB
Sending build context to Docker daemon
Step 0 : FROM node:0.10-onbuild
# Executing 3 build triggers
Trigger 0, COPY package.json /usr/src/app/
Step 0 : COPY package.json /usr/src/app/
Trigger 1, RUN npm install
Step 0 : RUN npm install
---> Running in a5b81efe0636
[info] No non localhost DNS resolver found in resolv.conf and containers can't use it. Using default external servers : [8.8.8.8 8.8.4.4]
[60fa92d5] +job allocate_interface(a5b81efe0636ab568bd93bbcbda64c6d1c2ff2a8c7baebf9b6572a95294093df)
[60fa92d5] -job allocate_interface(a5b81efe0636ab568bd93bbcbda64c6d1c2ff2a8c7baebf9b6572a95294093df) = OK (0)
[60fa92d5] +job log(start, a5b81efe0636ab568bd93bbcbda64c6d1c2ff2a8c7baebf9b6572a95294093df, 6e61fea2084d)
[60fa92d5] -job log(start, a5b81efe0636ab568bd93bbcbda64c6d1c2ff2a8c7baebf9b6572a95294093df, 6e61fea2084d) = OK (0)
npm WARN package.json techgraph@0.0.1 No repository field.
npm WARN package.json techgraph@0.0.1 No README data
express@4.10.4 node_modules/express
├── merge-descriptors@0.0.2
├── utils-merge@1.0.0
├── fresh@0.2.4
├── cookie@0.1.2
├── escape-html@1.0.1
├── range-parser@1.0.2
├── cookie-signature@1.0.5
├── finalhandler@0.3.2
├── vary@1.0.0
├── media-typer@0.3.0
├── methods@1.1.0
├── parseurl@1.3.0
├── serve-static@1.7.1
├── content-disposition@0.5.0
├── path-to-regexp@0.1.3
├── depd@1.0.0
├── qs@2.3.3
├── on-finished@2.1.1 (ee-first@1.1.0)
├── debug@2.1.0 (ms@0.6.2)
├── etag@1.5.1 (crc@3.2.1)
├── proxy-addr@1.0.4 (forwarded@0.1.0, ipaddr.js@0.1.5)
├── send@0.10.1 (destroy@1.0.3, ms@0.6.2, mime@1.2.11)
├── type-is@1.5.3 (mime-types@2.0.3)
└── accepts@1.1.3 (negotiator@0.4.9, mime-types@2.0.3)
[60fa92d5] +job log(die, a5b81efe0636ab568bd93bbcbda64c6d1c2ff2a8c7baebf9b6572a95294093df, 6e61fea2084d)
[60fa92d5] -job log(die, a5b81efe0636ab568bd93bbcbda64c6d1c2ff2a8c7baebf9b6572a95294093df, 6e61fea2084d) = OK (0)
[60fa92d5] +job release_interface(a5b81efe0636ab568bd93bbcbda64c6d1c2ff2a8c7baebf9b6572a95294093df)
[60fa92d5] -job release_interface(a5b81efe0636ab568bd93bbcbda64c6d1c2ff2a8c7baebf9b6572a95294093df) = OK (0)
Trigger 2, COPY . /usr/src/app
Step 0 : COPY . /usr/src/app
---> b87d168baa02
Removing intermediate container 796ba98d68fb
Removing intermediate container a5b81efe0636
Removing intermediate container e03e21971d9d
Step 1 : ADD package.json /
---> f70e4073b657
Removing intermediate container 9ae8b0c3f5ed
Step 2 : ADD app.js /
---> d78c048a2199
Removing intermediate container 4149681ef68b
Step 3 : RUN npm install
---> Running in b169fd1ed87c
[info] No non localhost DNS resolver found in resolv.conf and containers can't use it. Using default external servers : [8.8.8.8 8.8.4.4]
[60fa92d5] +job allocate_interface(b169fd1ed87c3be1793217ce6eb71a31d9cdbbcf00d8f84d33c4d8eff693eced)
[60fa92d5] -job allocate_interface(b169fd1ed87c3be1793217ce6eb71a31d9cdbbcf00d8f84d33c4d8eff693eced) = OK (0)
[60fa92d5] +job log(start, b169fd1ed87c3be1793217ce6eb71a31d9cdbbcf00d8f84d33c4d8eff693eced, d78c048a2199)
[60fa92d5] -job log(start, b169fd1ed87c3be1793217ce6eb71a31d9cdbbcf00d8f84d33c4d8eff693eced, d78c048a2199) = OK (0)
npm WARN package.json techgraph@0.0.1 No repository field.
npm WARN package.json techgraph@0.0.1 No README data
[60fa92d5] +job log(die, b169fd1ed87c3be1793217ce6eb71a31d9cdbbcf00d8f84d33c4d8eff693eced, d78c048a2199)
[60fa92d5] -job log(die, b169fd1ed87c3be1793217ce6eb71a31d9cdbbcf00d8f84d33c4d8eff693eced, d78c048a2199) = OK (0)
[60fa92d5] +job release_interface(b169fd1ed87c3be1793217ce6eb71a31d9cdbbcf00d8f84d33c4d8eff693eced)
[60fa92d5] -job release_interface(b169fd1ed87c3be1793217ce6eb71a31d9cdbbcf00d8f84d33c4d8eff693eced) = OK (0)
---> 78cc6e097fc1
Removing intermediate container b169fd1ed87c
Step 4 : EXPOSE 3000
---> Running in 1cdc1ed198a5
---> 1eac9601fffe
Removing intermediate container 1cdc1ed198a5
Successfully built 1eac9601fffe
[60fa92d5] -job build() = OK (0)
You can see from the log that it downloaded what we need. Coolio!
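One thing worth noting in the log: the node:0.10-onbuild base image comes with ONBUILD triggers that already copy package.json, run npm install and copy the rest of the build context (that is the "Executing 3 build triggers" part above), so our explicit ADD and RUN steps repeat that work. A slimmer alternative, just a sketch and not what we use here, would be to start from the plain node image:

FROM node:0.10
# Copy package.json on its own first, so the npm install layer
# stays cached until the dependency list actually changes.
COPY package.json /usr/src/app/
WORKDIR /usr/src/app
RUN npm install
# The application code changes more often, so it goes last.
COPY app.js /usr/src/app/
EXPOSE 3000
CMD ["node", "app.js"]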

The code

Now let's create a simple application which will handle adding a new technology, adding a new connection between two technologies and getting the adjacent technologies of a particular tech. Later we can add some logic for setting weights for connections to visualize how strong the connection is, and we can add weights to the techs as well to show how popular they are.
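As a taste of that, weights could be stored as plain properties on relationships and nodes; a hypothetical Cypher sketch (not part of the app yet, the names are illustrative):

// connect two existing technologies with a weighted relationship
MATCH (n1:Technology { name: "NodeJS" })
MATCH (n2:Technology { name: "Express" })
CREATE (n1)-[:IS_RELATED_TO { weight: 5 }]->(n2);

// store popularity directly on the node
MATCH (n:Technology { name: "NodeJS" })
SET n.popularity = 42;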

These are the API endpoints

POST:
/api/tech/new?name=[techname]

POST:
/api/connection/new?A=[Atechname]&B=[Btechname]&connection=[connection]

GET:
/api/tech/all
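Once the app is built and running (we will get there below), these endpoints can be exercised with curl, for example. A sketch; all parameters travel in the query string, so an empty POST body is fine:

curl -X POST "http://localhost:3000/api/tech/new?name=HTML"
curl -X POST "http://localhost:3000/api/connection/new?A=HTML&B=FLASH&connection=IS_RELATED_TO"
curl "http://localhost:3000/api/tech/all?technology=HTML"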

Let's open our empty app.js file and add our code.
The code is simple. Of course we would like to put the API endpoints, the functions and the configuration into separate files (modules) later, but at this stage we want to keep the solution simple.
var http = require('http');
var express = require('express');
var querystring = require('querystring');
var request = require('request');

var app = express();

app.get('/', function (req, res) {
    res.send('Hello World!');
});

// Create a new technology node.
// Note: concatenating user input into Cypher is fine for a prototype,
// but it is open to injection.
app.post('/api/tech/new', function (req, res) {
    var options = {
        // Neo4j's transactional Cypher endpoint; the address comes from --link neo4j:db
        uri: 'http://' + process.env.DB_PORT_7474_TCP_ADDR + ':7474' + '/db/data/transaction/commit',
        method: 'POST',
        json: { statements: [ { statement: 'CREATE (n:Technology { name:"' + req.param("name") + '" });' } ] }
    };
    request(options, function (error, response, body) {
        if (!error && response.statusCode == 200) {
            res.send(200);
        } else {
            res.send(500);
        }
    });
});

// Create a connection (relationship) between two existing technology nodes.
app.post('/api/connection/new', function (req, res) {
    var options = {
        uri: 'http://' + process.env.DB_PORT_7474_TCP_ADDR + ':7474' + '/db/data/transaction/commit',
        method: 'POST',
        json: { statements: [ { statement: 'MATCH (n1:Technology) WHERE n1.name = "'
            + req.param("A")
            + '" MATCH (n2:Technology) WHERE n2.name = "'
            + req.param("B") + '" CREATE (n1)-[:' + req.param("connection") + ']->(n2);' } ] }
    };
    request(options, function (error, response, body) {
        if (!error && response.statusCode == 200) {
            res.send(200);
        } else {
            res.send(500);
        }
    });
});

// Return the technologies directly connected to the given one.
app.get('/api/tech/all', function (req, res) {
    var options = {
        uri: 'http://' + process.env.DB_PORT_7474_TCP_ADDR + ':7474' + '/db/data/transaction/commit',
        method: 'POST',
        json: { statements: [ { statement: 'MATCH (n1:Technology) WHERE n1.name="' + req.param("technology") + '" MATCH (n2:Technology) RETURN (n1)-[*1]-(n2);' } ] }
    };
    request(options, function (error, response, body) {
        if (!error && response.statusCode == 200) {
            res.send(response.body);
        } else {
            res.send(500);
        }
    });
});

var server = app.listen(3000, function () {
    var host = server.address().address;
    var port = server.address().port;
    console.log('Example app listening at http://%s:%s', host, port);
});

Ok. Now we need to rebuild our Docker image.

ogi@ubuntu:~/techgraph/webapp$ sudo docker build -t webapp .
Sending build context to Docker daemon 6.144 kB
Sending build context to Docker daemon
Step 0 : FROM node:0.10-onbuild
# Executing 3 build triggers
Trigger 0, COPY package.json /usr/src/app/
Step 0 : COPY package.json /usr/src/app/
---> Using cache
Trigger 1, RUN npm install
Step 0 : RUN npm install
---> Using cache
Trigger 2, COPY . /usr/src/app
Step 0 : COPY . /usr/src/app
---> 3fa0ef35fbcb
Removing intermediate container ec0171e665fe
Step 1 : ADD package.json /
---> 5f7b89b4965c
Removing intermediate container b9edbb543490
Step 2 : ADD app.js /
---> 813c42f3fd7d
Removing intermediate container 5967a6cd24ad
Step 3 : RUN npm install
---> Running in 006e852a5774
npm WARN package.json techgraph@0.0.1 No repository field.
npm WARN package.json techgraph@0.0.1 No README data
---> 87d30e82b064
Removing intermediate container 006e852a5774
Step 4 : EXPOSE 3000
---> Running in 474b6698c7cb
---> 8ad13a3bc1f3
Removing intermediate container 474b6698c7cb
Successfully built 8ad13a3bc1f3
ogi@ubuntu:~/techgraph/webapp$

And run our web application. (Don't forget to stop and remove the old container first.)
ogi@ubuntu:~/techgraph/webapp$ sudo docker run --link neo4j:db -i -t -d -p 3000:3000 --name webapp webapp node app
3b65c41ec751256dc3eefdc3afb1fa092566afa3b72c873a4471496b08110e80

Notice that we are using --link neo4j:db. This creates some environment variables for us that we can use in our webapp for networking purposes.
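So inside the webapp the Neo4j endpoint can be derived from an environment variable instead of a hard-coded address; a minimal sketch (the localhost fallback is my addition, handy when running the app outside Docker):

// DB_PORT_7474_TCP_ADDR is injected by --link neo4j:db;
// fall back to localhost when running without Docker.
var dbHost = process.env.DB_PORT_7474_TCP_ADDR || 'localhost';
var neo4jUri = 'http://' + dbHost + ':7474/db/data/transaction/commit';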

Just because we are curious, let's check these variables in our running webapp container.
ogi@ubuntu:~/techgraph/webapp$ sudo docker exec -i -t webapp bash
root@3b65c41ec751:/usr/src/app# printenv
NODE_VERSION=0.10.33
DB_PORT_1337_TCP_PROTO=tcp
DB_PORT_7474_TCP_PORT=7474
HOSTNAME=3b65c41ec751
DB_NAME=/webapp/db
DB_ENV_JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64
TERM=xterm
DB_PORT=tcp://172.17.0.25:1337
DB_PORT_7474_TCP_ADDR=172.17.0.25
DB_PORT_1337_TCP_ADDR=172.17.0.25
DB_PORT_1337_TCP_PORT=1337
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/usr/src/app
DB_PORT_1337_TCP=tcp://172.17.0.25:1337
NPM_VERSION=2.1.9
DB_PORT_7474_TCP_PROTO=tcp
SHLVL=1
DB_PORT_7474_TCP=tcp://172.17.0.25:7474
_=/usr/bin/printenv
root@3b65c41ec751:/usr/src/app#
Cool! We see that we don't have to care what the exact IP address of our Neo4j container is. Docker handles this for us; we just need to use the environment variable.

Now we can create some technology nodes with curl or an alternative tool for making HTTP requests. I will execute these requests just to try it out:
  1. POST: http://0.0.0.0:3000/api/tech/new?name=HTML
  2. POST: http://0.0.0.0:3000/api/tech/new?name=FLASH
  3. POST: http://0.0.0.0:3000/api/connection/new?A=HTML&B=FLASH&connection=IS_RELATED_TO
  4. GET: http://0.0.0.0:3000/api/tech/all?technology=HTML
The last query returns this JSON (the empty rows come from Technology nodes that have no direct connection to HTML):
{"results":[{"columns":["(n1)-[*1]-(n2)"],"data":[{"row":[[[{"name":"HTML"},{},{"name":"FLASH"}]]]},{"row":[[[{"name":"HTML"},{},{"name":"FLASH"}]]]},{"row":[[]]},{"row":[[]]},{"row":[[]]},{"row":[[]]},{"row":[[]]},{"row":[[]]},{"row":[[]]},{"row":[[]]},{"row":[[]]},{"row":[[]]},{"row":[[]]},{"row":[[]]},{"row":[[]]},{"row":[[]]},{"row":[[]]},{"row":[[]]},{"row":[[]]},{"row":[[]]},{"row":[[]]},{"row":[[]]},{"row":[[]]},{"row":[[]]},{"row":[[]]},{"row":[[]]},{"row":[[]]},{"row":[[]]},{"row":[[]]},{"row":[[]]},{"row":[[]]},{"row":[[]]},{"row":[[]]},{"row":[[]]},{"row":[[]]},{"row":[[]]},{"row":[[]]},{"row":[[]]},{"row":[[]]},{"row":[[]]},{"row":[[]]},{"row":[[]]}]}],"errors":[]}


In the next part we will add the client code and actually draw our graph.

Friday, 9 January 2015

Meetups: Selling to schools

 Selling to schools
Jan 2015

I wanted to learn about how to approach a school with a product (software application).

The key things I learned (and haven't forgotten :)):


  • teachers and schools live their own lives, so don't expect them to always find time for online chats with you; you will probably have to travel
  • getting 10-100 schools on board is not easy at all, but it is manageable; above 100 it is extremely hard
  • the UX has to be perfect, don't try to save energy on it
  • online marketing through e-mails is still (40 times) more effective than using social networks
  • schedule e-mails for early mornings, Tuesday to Thursday
  • get celebrities and famous people to tweet for you
  • ask the teachers for feedback and quotes; they won't give them without being asked
  • involve the users ASAP once you have something working and STABLE
  • be patient and kind!


Anyone who is interested in learning more should check out this meetup and the presenters. They do this professionally; really recommended.

Check out the meetup here.
http://www.meetup.com/EdTech-Developers-Meetup/events/218818932/



Thanks to the organizers!

Meetups: Docker London

 Docker London
Jan 2015


I had never been to a Docker meetup, so I was curious, and so were lots of others.
The organizers created a good event. The order of the presentations was perfect. I wish all meetup organizers would think about how to get the most out of the presentations by finding a balance, so that people are neither bored nor losing attention because of too many details.

The order was good because Andrew's presentation was a good introduction, summing up how Docker can fit into a continuous delivery pipeline and how it is done at his company. Then Johan's presentation was more of a demo of the gcloud tool (+ Kubernetes) and the Google Cloud Platform, which I think was good for those who already knew about the tools around Docker but had not yet tried Kubernetes.
At the end, Dan Williams's talk about his trip to find out how container ships work in real life was really great, and exactly what was needed at the end of the day to chill out.

All of these talks were about things that already exist and have been done. I really missed talks about new ideas or possible security issues, like the issues with Docker pull, but that is maybe more for a conference talk.
Check the meetup and the agenda here:
http://www.meetup.com/Docker-London/events/218940249/

And thanks to the organizers for creating the event! I really recommend it!


Tuesday, 25 November 2014

Creating a graph of software technologies with Docker, Neo4j, NodeJS and D3.js Part 1

Feedback highly appreciated


Okay! So what would I like to do?


I would like to experiment with some technologies that have been on my mind lately, and have fun. I might later swap some of the pieces (maybe Node for a JVM platform, or one client-side JS framework for another). The idea is to visualize the connections between software technologies using this stack.

The imagined problem and solution are the following. (It might turn out to be a good thing, or maybe it will just remain a prototype.)

Usually we are interested in particular technologies and their relations to others, so we start googling. We do this because we know what we want to achieve but don't know what tools are available, or we know some of the tools but want to see if there are comparable alternatives. We usually select by checking whether the technology is free, whether it is mature enough for our purpose, whether there is a community or a known company behind it, and whether it is maintained frequently; sometimes we also need to predict how long the technology will be around before a new one comes along.

To visualize the technologies I would like to create a web application.

  • It will run in a browser, so I need some JavaScript libs for sure. I have always wanted to try out the powerful D3.js.
  • On the server side I will go with Node.js for now as a middleware (might swap to a JVM platform later). It will provide an API for the client application.
  • The data will be stored in a graph database, as the whole model is basically a graph; for now, let's go with Neo4j.
  • I would like to put the server side pieces in lightweight containers. Docker will be perfect for this.
  • I will need to manage the containers and I don't want to do this manually, so I might use Fig or Flocker or Kubernetes or all :).

Let's start and see what happens :)


What types of connections should we consider?


Implements

It means that technology A (e.g. a library or framework) implements specification or protocol B.

Uses

This connection says that technology A uses technology B. This is a transitive relation.

Extends

Technology A extends technology B. This is a transitive relation.

Relates to

Technology A is related to technology B. This is a symmetric relation.

A is an alternative of B

Technology A was created for a similar purpose to B. This is a symmetric relation.

Later we can consider the inverse connections like contains, specifies, etc. We have to do this carefully: more specific types can speed up our queries, but they can also slow them down in some cases. It really depends on the use cases and the size of the graph. For now this is enough. To learn more about this, the Graph Databases book is a great reference.

We can model these relationships with a property graph model.
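For instance, with hypothetical names and the same Technology label we will use later in the webapp, a small piece of this graph could be created in Cypher like this:

CREATE (d3:Technology { name: "D3.js" }),
       (js:Technology { name: "JavaScript" }),
       (raphael:Technology { name: "Raphael" }),
       (d3)-[:USES]->(js),
       (raphael)-[:USES]->(js),
       (d3)-[:IS_AN_ALTERNATIVE_OF]->(raphael);

// everything directly connected to D3.js
MATCH (t:Technology { name: "D3.js" })-[r]-(other) RETURN t, r, other;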


What do we need?

(Note: an installed Docker and some Linux distribution are required. Windows and Mac users have to use boot2docker and set up port forwarding for their boot2docker-controlled VM. Another option for Windows users is Spoon, but that is not a Docker-based platform.)
Let's build our stack from bottom to top.

We need a running Neo4j instance. Why not run it in a Docker container? We could then move our instance whenever we want, or reuse the image for staging environments, integration tests, or adding new nodes. Unfortunately there is no official Neo4j Docker image on Docker Hub, but there is a quite popular one created by tpires. It could be a good fit, but first let's check its Dockerfile to see how the image was built.

It is based on dockerfile/java which is based on an ubuntu image.

Looks good!

Just to try whether it works (and to get the dependencies), let's run what the author says:
docker run -i -t -d --name neo4j --privileged -p 7474:7474 tpires/neo4j

Here we are saying: hey Docker, please run the tpires/neo4j image in a container, and bind the host machine's port 7474 to the container's port 7474.
ogi@ubuntu:~$ sudo docker run -i -t -d --name neo4j --privileged -p 7474:7474 tpires/neo4j
Unable to find image 'tpires/neo4j' locally
Pulling repository tpires/neo4j
bc1c23b28916: Pulling dependent layers
511136ea3c5a: Download complete
d497ad3926c8: Download complete
bc1c23b28916: Download complete
e791be0477f2: Download complete
3680052c0f5c: Download complete
22093c35d77b: Download complete
5506de2b643b: Download complete
b08854b89605: Download complete
d0ca2a3c0233: Download complete
1716e82f74f0: Download complete
b41d25703535: Download complete
e95dbc5735e1: Download complete
5992007b07de: Download complete
b4e54ddfb2af: Download complete
cb875b6a5e56: Download complete
ea9d3f0791a1: Download complete
ad4d64683ae2: Download complete
1e40114b530a: Download complete
78bda9302d72: Download complete
1c2b68432f4e: Download complete
33e130bf1c86: Download complete
dabafd1110de: Download complete
d35b10c1f6c2: Download complete
5b83638ca8f8: Download complete
c3e91297793d: Download complete
Status: Downloaded newer image for tpires/neo4j:latest
ad63739868ec29d7fd820517e9b69f6743eb9f552d41c1b6184a85eb2e1ce927

Great. Let's check if it's running.

ogi@ubuntu:~$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ad63739868ec tpires/neo4j:latest "/bin/bash -c /launc 4 days ago Up 4 days 1337/tcp, 0.0.0.0:7474->7474/tcp neo4j
Yup!

Neo4j runs a webserver for us, so let's open our browser and type localhost:7474.


It works!

It would be good to separate the data from the functionality into two containers, so as not to lose portability, so let's create a data-only container.
Create a Dockerfile.
ogi@ubuntu:~$ touch Dockerfile

Let's reuse ubuntu image and add a volume.
FROM ubuntu
VOLUME /var/lib/neo4j/data
CMD ["true"]

Build the image.
ogi@ubuntu:~$ sudo docker build .
Sending build context to Docker daemon 116.6 MB
Sending build context to Docker daemon
Step 0 : FROM ubuntu
---> 5506de2b643b
Step 1 : VOLUME /var/lib/neo4j/data
---> Using cache
---> 2ba1b980567c
Step 2 : CMD true
---> Using cache
---> 4b6c62e5e10c
Successfully built 4b6c62e5e10c

Run the image.
ogi@ubuntu:~$ sudo docker run --name neo4j-data 4b6

Bind the volume to our Neo4j container and run it. (Don't forget to stop and remove the old neo4j container first.)
sudo docker run -i -t -d --name neo4j --volumes-from neo4j-data --privileged -p 7474:7474 tpires/neo4j

Nice. 
Ok, so now we have a running database. Of course it is not production-ready, but for this prototype it is enough.

Next we will add a middleware based on the Node platform, which will call Neo4j's REST API and add some business logic. We will do this in the next part.
(Note: if we wanted to, we could extend Neo4j's REST API by writing extensions in Java using JAX-RS annotations.)

Feedback highly appreciated