I came across SparkJava yesterday. It is a really interesting micro web framework, especially for Java, where nothing really is micro and something like this is badly needed. Their website does a good job of explaining how it works and how to get started. I was up and running in minutes.
I build demos and proofs of concept as part of my job and am always on the lookout for more efficient and easier ways to do things. The RESTful example caught my eye. I decided to extend it and show how you can easily expose data stored in GigaSpaces.
I created a Maven web project using the archetype:generate goal:
mvn archetype:generate -DgroupId=com.gigaspaces.spark -DartifactId=gs-spark -DarchetypeArtifactId=maven-archetype-webapp -DinteractiveMode=false
I took the RESTful code mentioned above. Since I want to run the RESTful service in GigaSpaces, I changed the code to implement the SparkApplication interface and define the routes in its init() method. I also created the web.xml as suggested here.
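For reference, Spark is wired into a web container through its servlet filter in web.xml; the applicationClass value below is a class name I made up for illustration, not necessarily the one in the repo:

```xml
<web-app>
  <filter>
    <filter-name>SparkFilter</filter-name>
    <filter-class>spark.servlet.SparkFilter</filter-class>
    <init-param>
      <!-- Fully qualified name of the SparkApplication implementation -->
      <param-name>applicationClass</param-name>
      <param-value>com.gigaspaces.spark.GigaSpacesRestApp</param-value>
    </init-param>
  </filter>
  <filter-mapping>
    <filter-name>SparkFilter</filter-name>
    <url-pattern>/*</url-pattern>
  </filter-mapping>
</web-app>
```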
I modified the CRUD operations to go against a GigaSpace instead of a Map. To keep it simple, I create the GigaSpace inside the RESTful service/web container, but in a real application you would connect to an external GigaSpaces cluster, which only requires changing the space URL.
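A minimal sketch of what the service ends up looking like, assuming the Spark 1.x anonymous-Route style and the OpenSpaces configurer API; the class name, space name, and the User POJO (a simple @SpaceId-annotated class with a name field) are my own illustrations, and error handling is omitted:

```java
package com.gigaspaces.spark;

import org.openspaces.core.GigaSpace;
import org.openspaces.core.GigaSpaceConfigurer;
import org.openspaces.core.space.UrlSpaceConfigurer;

import spark.Request;
import spark.Response;
import spark.Route;
import spark.servlet.SparkApplication;

import static spark.Spark.get;
import static spark.Spark.post;

public class GigaSpacesRestApp implements SparkApplication {

    @Override
    public void init() {
        // Embedded space for the demo. Pointing the URL at e.g.
        // "jini://*/*/mySpace" would connect to an external cluster instead.
        final GigaSpace gigaSpace = new GigaSpaceConfigurer(
                new UrlSpaceConfigurer("/./restSpace")).gigaSpace();

        // CREATE: write a User entry to the space instead of a Map.
        post(new Route("/users") {
            @Override
            public Object handle(Request request, Response response) {
                User user = new User(request.queryParams("name"));
                gigaSpace.write(user);
                return user.getId();
            }
        });

        // READ: look the user up by id with a template query.
        get(new Route("/users/:id") {
            @Override
            public Object handle(Request request, Response response) {
                User template = new User();
                template.setId(request.params(":id"));
                User found = gigaSpace.read(template);
                return found != null ? found.getName() : "not found";
            }
        });
    }
}
```

The other CRUD routes (update via put, delete via space take operations) follow the same pattern.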
The source code is in the GitHub repo here. The README file has instructions on how to run this on your side.
As you can see, SparkJava seems to be a promising framework: very little configuration and code are needed to build web apps.
Hope this helps others looking to build simple web apps in Java.
The Couchbase Cloudify recipe was released a couple of weeks back; more information about the recipe can be found here.
I had the opportunity to use the Cloudify Couchbase recipe for one of the POCs I was working on. My POC needed the XDCR (cross data center replication) feature, which Couchbase introduced as part of the 2.0 release, but the Cloudify recipe did not have support for it.
Luckily, Cloudify recipes are easily extendable. Cloudify custom commands let you reuse existing recipes and introduce new behavior that meets your needs. I decided to add a new custom command to the Couchbase recipe for enabling XDCR.
The Couchbase blog describes in detail how to enable XDCR between clusters here. I needed to automate these steps as part of the custom command, so I used the REST API to create a destination cluster reference and to create XDCR replications for the appropriate buckets.
The XDCR custom command supports all the relevant parameters: localBucketName, remoteClusterRefName, remoteClusterNode1, remoteClusterPort, remoteClusterUser, remoteClusterPassword, remoteBucketName, and replicationType.
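Under the hood, these parameters map onto two Couchbase REST calls. A hedged sketch with curl, based on my understanding of the Couchbase 2.0 REST API (the endpoint paths and field names are assumptions to verify against the Couchbase docs; hosts and credentials are placeholders):

```shell
# 1. Create a reference to the remote (destination) cluster.
#    Maps to remoteClusterRefName, remoteClusterNode1, remoteClusterPort,
#    remoteClusterUser, and remoteClusterPassword.
curl -u admin:mypassword http://localhost:8091/pools/default/remoteClusters \
  -d name=apac-cluster \
  -d hostname=10.10.10.10:8091 \
  -d username=admin \
  -d password=mypassword

# 2. Create a continuous XDCR replication from the local bucket to the
#    remote bucket through the cluster reference created above.
#    Maps to localBucketName, remoteBucketName, and replicationType.
curl -u admin:mypassword http://localhost:8091/controller/createReplication \
  -d fromBucket=appBucket \
  -d toCluster=apac-cluster \
  -d toBucket=appBucket \
  -d replicationType=continuous
```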
When I tried this on my test clusters on EC2, I saw errors saying the cluster could not find the other cluster's member(s). After further research and speaking with the Couchbase folks, I realized that I had to use the public DNS names when configuring the clusters and pass the public DNS name when enabling XDCR.
On an EC2 instance this was pretty easy to do: I used `wget -q -O - http://169.254.169.254/latest/meta-data/public-hostname` to get the public DNS name within the scripts and configured the cluster with it.
The updated recipe with XDCR support is on my GitHub fork here.
An example custom command invocation that enables XDCR to the apac-cluster node 10.10.10.10, with the arguments following the parameter order above, is:
invoke couchbase xdcr appBucket apac-cluster 10.10.10.10 8091 admin mypassword appBucket continuous
As you can see, in my case it was fairly easy to extend an existing Cloudify recipe and tailor it to my needs.
Hope you found this information useful.