
Tuesday, December 16, 2008

Unable to RIP, my foundations are shaky..

"Oh No! Not REST again. Can't you post something else? Don't you see that you are blogging about something that is only a blip in the CS Continuum which will soon be rendered immaterial?For heaven sake, there are more important things to write about!..."

That, my friends, was my split SOAP personality coming forth for a few brief moments ;-)..Down boy, down! This ain't one of those bashing blogs; it's got nothing to do with you, it's more of a housekeeping blog. I need to get my thoughts in order. We shall have our little death match soon, I promise. By the way, this post has more to do with HTTP than REST.

As I work with REST and HTTP, there are some fundamental concepts that I need to absorb and document for my reference or for anyone else who might find them useful. For anyone getting into REST web services, one book is a must read: RESTful Web Services by Richardson and Ruby. Most of this post summarizes and discusses what is written in the book, in addition to my 2c.

Some Definitions:

Side Effects in Programming:
A side effect free operation is one which, when invoked, does not result in a "hidden" or "unexpected" change of some other state. Maybe a better definition can be obtained from Wikipedia. Broadly speaking, calling a method add(int a, int b) should not blow up my system ;-) or take money out of someone's bank account and credit it to mine :-)))

Idempotency:

An operation is said to be idempotent when multiple invocations of the same operation, by the same or different consumers, always leave the resource in the same state. When working with Resources, quoting the Web Services book, "An operation on a resource is idempotent, if making one request is the same as making a series of identical requests. The second and subsequent requests leave the resource state in exactly the same state as the first request."

HTTP Methods that I am unclear about:

1. GET:

GET method is used to retrieve whatever information is available at a particular Request URI. The W3C documentation states "If the Request-URI refers to a data-producing process, it is the produced data which shall be returned as the entity in the response and not the source text of the process, unless that text happens to be the output of the process. "

A GET request is issued to obtain information. It is not meant to change any state on the server. "No Change" seems to be the buzz word. Safety is of the utmost importance. If one executes "/orders/order/23" zero or more times, it should be safe every time. Executing the request should not leave the consumer concerned about having changed the state of the resource. Now, "same" does not mean that the result obtained is the "same" across all the calls over time. For example, it is possible that during the 1st request made, the order with number 23 was non-existent; when the 2nd request was made, the order was returned (maybe a PUT occurred that created the resource); when the third request was made, the same order was returned but had different content (maybe due to a PUT that changed the resource).

What is the "idem potency" part here? Are we leaving the resource in the same state across multiple requests?

What about cases like "HIT counters": should these be exposed via a GET operation? The Web Services book seems to say that GET requests CAN have side effects and states that HIT counters are a candidate for a GET operation. If we go back to the W3C definition of GET, it states that if the GET operation invokes a URI that is a data producing process, the data will be returned as part of the response. For example, on every GET request, we might be changing server state due to some logging occurring. In the case of HIT counters, or let's say a service that generates UUIDs via GET "/uuidgenerator", state is being changed from request to request. Side effects are occurring, and majorly so. So what's all this stuff we stated above regarding GETs not changing server state in a visible, substantial way? Isn't a call to increment the HIT counter really an "Increment and get the next value"?

The authors of the Web Services book state "A client should never make a GET or HEAD request just for the side effects, and the side effects should never be so big that the client might wish it hadn't made the request". When a client invokes a GET on a hit counter, if the client does not expect it to have a side effect of incrementing the counter, what is the point in making the request? I understand if the call was a GET "/hitCounter/currentCount"; then the request is not changing the count. However, a GET that auto-increments is definitely a major side effect IMO. The same would apply to a UUID generator which would create a new one on every call. This blog has an interesting discussion on GET and idempotency.

A document from the W3C attempts to address when it is appropriate to use GET. Again, I do not feel the document answers my question.

I am rather torn, as GET for a UUID seems so natural, "Get me a new UUID". I am also unsatisfied with the explanations from the Web Services book. However, in light of all the above, my take is that items like hit counters, uuid generators etc. are better handled via a "POST" operation. We are definitely changing the state of the resource, and with "INTENT" of doing so from the Client's perspective, so is it not better to use a POST and consume the response? Am I totally wrong here...?
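
To make the distinction concrete, here is a minimal sketch using JAX-RS style annotations (my choice of API for illustration, not something from the book); the resource name and fields are made up. The read-only GET stays side effect free, while the increment, being an intentional state change, is modeled as a POST:

import java.util.concurrent.atomic.AtomicLong;

import javax.ws.rs.GET;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;

@Path("/hitCounter")
public class HitCounterResource {
  // Static because JAX-RS resources are typically per-request; a real
  // implementation would persist the count elsewhere.
  private static final AtomicLong COUNT = new AtomicLong();

  // Safe and side effect free: reading the current count changes nothing.
  @GET
  @Path("/currentCount")
  @Produces("text/plain")
  public String currentCount() {
    return Long.toString(COUNT.get());
  }

  // The increment is a deliberate state change, so it is a POST.
  @POST
  @Produces("text/plain")
  public String increment() {
    return Long.toString(COUNT.incrementAndGet());
  }
}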

2. PUT:

A PUT operation is typically used to create or update a resource. From the W3C, "The PUT method requests that the enclosed entity be stored under the supplied Request-URI. If the Request-URI refers to an already existing resource, the enclosed entity SHOULD be considered as a modified version of the one residing on the origin server. If the Request-URI does not point to an existing resource, and that URI is capable of being defined as a new resource by the requesting user agent, the origin server can create the resource with that URI"

What the above tells me is to use PUT to create a resource that can subsequently be queried with a GET request to the created resource URI.

For example, consider a PUT to "/orders/order/23". When there is no resource at the URI, the PUT operation results in the creation of an Order with Id = 23. Invoking PUT to "/orders/order/23" when an order previously existed at the resource equates to updating the order. After either of the above PUT calls, a GET to "/orders/order/23" will obtain a resource, i.e., the Order with Id = 23.

From the above, if creating a new resource, I would only use PUT if I have all the information required to create/update the resource before the call is made, such that a subsequent GET operation can be executed with the information I already possessed to obtain the newly created Resource. I do not expect the Resource code present on the server at the URI to supply any additional data that would then allow me to locate the resource based on server provided data. What I create with, I should be able to GET.

PUT indicates a call where the client is in control of exactly where and how the Resource will be identified. Furthermore, PUT is idempotent; multiple calls with the same information have the same result. So in the case of the order example, before creating the resource, I would ask a UUID service for a unique identifier and then use that identifier to uniquely identify the resource I create. Note that in the case of the "Order" example, one is actually creating a "sub-resource" of the orders resource, so is PUT an acceptable operation to do the same? Yes, the book is a good read for details.
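
A minimal sketch of what such a create-or-update PUT might look like, again using JAX-RS style annotations purely for illustration; the in-memory map and the XML-as-string payload are stand-ins for real persistence and binding:

import java.net.URI;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

import javax.ws.rs.Consumes;
import javax.ws.rs.PUT;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.core.Response;

@Path("/orders/order")
public class OrderResource {
  // Hypothetical in-memory stand-in for a real data store.
  private static final ConcurrentMap<String, String> ORDERS = new ConcurrentHashMap<String, String>();

  // The client already knows the URI; repeating the same request leaves the
  // resource in the same state, which is what makes PUT idempotent.
  @PUT
  @Path("/{orderId}")
  @Consumes("application/xml")
  public Response putOrder(@PathParam("orderId") String orderId, String orderXml) {
    boolean existed = ORDERS.put(orderId, orderXml) != null;
    return existed
        ? Response.ok().build() // updated in place
        : Response.created(URI.create("/orders/order/" + orderId)).build(); // 201 at the known URI
  }
}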

3. POST:

POST, IMO is one of the HTTP methods that is most flexible to use. The W3C documentation on POST states "The POST method is used to request that the origin server accept the entity enclosed in the request as a new subordinate of the resource identified by the Request-URI in the Request-Line". Another interesting line from the W3C documentation is "The action performed by the POST method might not result in a resource that can be identified by a URI."

From the above, POST should be used when creating a "Sub-Resource" of a resource, for example, a line item under a particular order, such as POST "/order/23/lineItems/". In other words, POST is used to create a Resource without knowing in advance exactly where the Resource will be available upon creation. After the call to POST, the newly created line item resource might be available at "/order/23/lineItems/lineItem/2" or at "/order/lineItems/lineItem/3" or at some other identifier. The client that made the call to POST the data has no way of knowing the URI where the Line Item can be obtained from, prior to performing the call.

POST, IMO, is a great candidate when creating a resource whose Id will be generated when the resource is created, for example, when creating a line item that is a sub resource of the order.
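
A minimal sketch of that kind of POST, once more with JAX-RS style annotations used only for illustration (the sequence, map and URI layout are assumptions): the server generates the identifier and advertises the new resource's location in the response.

import java.net.URI;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicLong;

import javax.ws.rs.Consumes;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.core.Response;

@Path("/order/{orderId}/lineItems")
public class LineItemsResource {
  private static final AtomicLong SEQUENCE = new AtomicLong();
  private static final ConcurrentMap<String, String> LINE_ITEMS = new ConcurrentHashMap<String, String>();

  // The client cannot know the line item's URI up front; the server mints
  // the id and returns 201 Created with a Location header pointing at it.
  @POST
  @Consumes("application/xml")
  public Response createLineItem(@PathParam("orderId") String orderId, String lineItemXml) {
    long id = SEQUENCE.incrementAndGet();
    LINE_ITEMS.put(orderId + ":" + id, lineItemXml);
    URI location = URI.create("/order/" + orderId + "/lineItems/lineItem/" + id);
    return Response.created(location).build();
  }
}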

What about using POST for NOT creating a Resource that can be identified by a URI? This is total room for an RPC style of programming. A Resource can be used as an RPC style processor by interrogating the request and performing different operations based on it. The Web Services book terms this Overloaded POST. Uses of POST such as "/myresource?method=save" or "/myresource?method=archive" are uses of POST that are more RPC oriented.

Using POST for something like, "/calculateCost", IMO is a valid use of POST where a request is submitted and the response provides the result. As mentioned before, I am of the opinion that POST is the better candidate for HIT Counters, Id Generators etc.

One problem that plagues URIs is the allowed length. Although the HTTP standard does not define a limit on URI length, clients and servers do. I like the example from the Web Services book: a GET "/numbers/11111.........." represents a problem. However, performing a POST on a resource by specifying a "method" seems a reasonable way to overcome this problem, e.g., POST "/numbers?method=GET" where the number "11111....", a very long number, is in the body of the POST.
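
A sketch of that overloaded POST, with JAX-RS style annotations used for illustration only; the resource path and the "number of digits" response are made up, the point is simply that the oversized value travels in the body while the "method" query variable records the intent:

import javax.ws.rs.Consumes;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.QueryParam;
import javax.ws.rs.WebApplicationException;

@Path("/numbers")
public class NumbersResource {

  // Overloaded POST: the query variable records what the client really wanted
  // (a GET), and the very long number is carried in the request body.
  @POST
  @Consumes("text/plain")
  @Produces("text/plain")
  public String overloadedPost(@QueryParam("method") String method, String veryLongNumber) {
    if (!"GET".equalsIgnoreCase(method)) {
      throw new WebApplicationException(400); // only the overloaded GET is supported here
    }
    // Pretend lookup: report something about the number instead of echoing it back.
    return "Number of digits: " + veryLongNumber.trim().length();
  }
}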

In other words, is it OK to bend the rules of REST in cases where URI length is a concern? This seems to me an architectural and design compromise one needs to suffer when URI length breaks the underlying system. Overloaded POST needs to be used judiciously. If URI length is NOT a concern, I recommend not using overloaded POSTs at all.

What about algorithms or singular methods available at a resource? Is POST the right HTTP method for the same?

When to use Query Variables?

One finds cases where representing Resources via fully qualified paths sometimes feels rather verbose. Do I need to create a resource for every possible path? Scoping a Resource sometimes does not sound right, and in other cases it is just painful when the nesting gets very deep :-).

The authors of the Web Services book prefer to avoid query variables where possible. Quoting the authors, "..including them (query vars) in the URI is a good way to make sure that URI gets ignored by tools like proxies, caches and web crawlers". Those are great arguments, but minting Resource URIs for criteria that will not necessarily be reused from one call to another is pretty steep. The authors especially acknowledge the value of query variables when they apply to searches or what they generalize as "algorithms".

I totally agree with the authors regarding the appropriate use of query variables when a search is involved. Rather than having an individual URI path for every possible criterion and sub-criterion, the use of query variables is more apt for the problem.

Consider the Yahoo API, http://search.yahooapis.com/WebSearchService/V1/webSearch?appid=YahooDemo&query=finances&format=pdf

Note the use of the "webSearch" at the end of the URI prior to the Query Variables that follow. It is my opinion that the above is a great example of how a URI for search should be developed.

So in the same light, "/orders/search?createDate=20081127&containsItem=XBox.." is a great use of Query Variables.
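
A sketch of such a search resource, once again expressed with JAX-RS style annotations purely to illustrate the shape; the criteria names match the example URI above, and the XML string response is a stand-in for a real representation:

import java.util.ArrayList;
import java.util.List;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.QueryParam;

@Path("/orders/search")
public class OrderSearchResource {

  // Criteria arrive as query variables rather than ever-deeper URI paths;
  // criteria the caller omits simply come through as null.
  @GET
  @Produces("application/xml")
  public String search(@QueryParam("createDate") String createDate,
                       @QueryParam("containsItem") String containsItem) {
    List<String> criteria = new ArrayList<String>();
    if (createDate != null) {
      criteria.add("createDate=" + createDate);
    }
    if (containsItem != null) {
      criteria.add("containsItem=" + containsItem);
    }
    // A real implementation would hand these criteria to a data store.
    return "<searchCriteria>" + criteria + "</searchCriteria>";
  }
}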

Conclusion:

I hope I have understood the "basics" properly. If not, as always, I would appreciate insight in the matter. I am back to re-reading the book. If nothing else, I find the HTTP specifications rather nebulous in places. My take on the methods and query variables:

  • Use PUT when you know that you can GET the Resource based on the information you have when you will be PUTting the Resource.
  • For algorithms such as HIT counters, use POST.
  • Use POST when creating a sub resource, especially when working with a generated database identifier that will define your resource URI.
  • Query Variables are great when performing searches. In particular, let the resource where the search begins be denoted as such, i.e., ".../.../foo../bar../search?...". Or in other words, qualify till applicable before resorting to search.
  • Do not overload POST and make it an RPC call. SOAP personality, please relax..its not personal :-)

Finally,

  • "/orders" - POST to create a new order seems correct
  • "/orders/123" - PUT to update order 123 seems correct
  • "/orders/123" - DELETE seems correct to delete order 123
  • "/orders" - GET seems correct to get all orders.
  • "/orders/123" - GET seems correct to get order 123

Tuesday, December 9, 2008

jvmti, jni - Absolute Power

It's a snowy day..I am sick; for those who jump to say "I've known that for years!", chill! :-) I am definitely under the weather and, to add to it, I have a snowy landscape to view. I am also suffering from really erratic sleep patterns: sometimes I am unable to sleep until 5:45 a.m. and other times I wake up at 4:45 a.m. For one reason or another, I always land in a state that has snow. Maybe it's destiny. Maybe one day when I move back to Bangalore, it will snow there as well, due to global warming or a nuclear winter, what have you. Immediate concerns: why could I not be in Florida? All ye Florida recruiters, ping me ;-) I apologize for the brief frustration release.

Anyway, that is enough for introductions. As always, I do not really know what to do with my precious time and instead choose to waste it. I have been interested in java class instrumentation and the esoteric world of java under the covers ever since I attended a presentation by Mr. Ted Neward. In particular, I have been wanting to play with JVMTI. Doing so meant I would need to enter the world of C programming, something I seem to have forgotten since working with java. So what am I trying to do? I am wishing that, while developing unit tests, I could:
  • Force a Garbage collection of the JVM
  • Check to see if the objects I allocated are cleaned out
  • Determine what objects are on the heap
  • Determine what the state of the different threads on the VM are...
  • Determine what objects are reachable
  • And More...
Sure you can, just use a profiler like JProbe, YourKit or whatever. But how do they do it? JVMTI is the answer.

What is JVMTI? "The JVM tool interface (JVM TI) is a standard native API that allows for native libraries to capture events and control a Java Virtual Machine (JVM) for the Java platform"...That is the official statement, mine is "Power Baby, Power!". Read more about JVMTI, JVMPI and how agents work in this fantastic article by Kelly O'Hair and Janice J. Heiss.

In particular, the folder JDK_HOME/demo/jvmti of your JDK has multiple demonstrations of JVMTI features. I spent quite some time running them and would recommend taking a look at the demos, for my fellow enthusiasts.

So what am I looking for? What I would like to do is load a library using JNI and use JVMTI to print debug information regarding my application state. In particular, I am looking to see whether or not my code cleans up after itself.

I have a class Foo that is rather plain and does the following. Note that the same could easily be replaced by a JUnit test:



public class Foo {
  Bar b;

  private static Bar BAR = new Bar();

  public Foo() {
    b = new Bar();
  }

  public static class Bar {}

  public String sayHello() {
    ProgramMonitor.dumpHeap();
    return "Hello World";
  }

  public static void createFoo() {
    ProgramMonitor.forceGC();
    ProgramMonitor.dumpHeap();
    new Foo().sayHello();
  }

  public static void main(String args[]) {
    Foo.createFoo();
    ProgramMonitor.forceGC();
    ProgramMonitor.dumpHeap();
  }
}



My code for the ProgramMonitor class is rather simple and uses JNI as shown below:



public class ProgramMonitor {
  public static native int getNumberOfLoadedClasses();

  public static native void dumpHeap();

  public static native void forceGC();

  static {
    System.load(System.getProperty("jvmtilib"));
  }
}


So what am I trying to accomplish? When an object of the "Foo" class is created, it results in the creation of a "Bar" instance as well. In addition, when Foo.class is loaded, it creates a static reference to a Bar as well. When the program is done with the "Foo" object that was instantiated, its Bar object should be gone, i.e., GCed. However, the static reference to Bar in the Foo class should still be available.

What if I could view this same happening and assert the same ?
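
In other words, something like the following JUnit-style sketch. Note that countInstances(Class) is a hypothetical native method that does not exist in the example as written; one could imagine backing it with JVMTI's IterateOverInstancesOfClass, but treat this purely as an illustration of the kind of assertion I would like to make:

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class FooGcTest {

  @Test
  public void transientBarIsCollected() {
    new Foo().sayHello(); // creates a transient Foo and its Bar

    ProgramMonitor.forceGC(); // ask JVMTI to force a collection
    ProgramMonitor.dumpHeap();

    // Only the static Bar held by Foo.class should survive.
    // countInstances(Class) is hypothetical; it is not part of the example code.
    assertEquals(1, ProgramMonitor.countInstances(Foo.Bar.class));
  }
}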

As shown above, the ProgramMonitor.java invokes native methods. One can create a C header file from the class definition by executing the following command:

>javah -jni -classpath . ProgramMonitor

The above call results in the creation of a C header file called ProgramMonitor.h that looks like:

....
/*
 * Class:     ProgramMonitor
 * Method:    getNumberOfLoadedClasses
 * Signature: ()I
 */
JNIEXPORT jint JNICALL Java_ProgramMonitor_getNumberOfLoadedClasses
  (JNIEnv *, jclass);

/*
 * Class:     ProgramMonitor
 * Method:    dumpHeap
 * Signature: ()V
 */
JNIEXPORT void JNICALL Java_ProgramMonitor_dumpHeap
  (JNIEnv *, jclass);

/*
 * Class:     ProgramMonitor
 * Method:    forceGC
 * Signature: ()V
 */
JNIEXPORT void JNICALL Java_ProgramMonitor_forceGC
  (JNIEnv *, jclass);
.....



The above generated header file defines the JNI functions that one needs to implement. A C file that implements the JNI header functions is created, i.e., ProgramMonitor.c. Shown below are only some parts of ProgramMonitor.c:



...
#include "jni.h"
#include "jvmti.h"
#include "ProgramMonitor.h"

/* Check for JVMTI error */
#define CHECK_JVMTI_ERROR(err) \
  checkJvmtiError(err, __FILE__, __LINE__)

static jvmtiEnv *jvmti;

.....
JNIEXPORT jint JNICALL JNI_OnLoad(JavaVM *vm, void *reserved) {
  jint rc;
  jvmtiError err;
  jvmtiCapabilities capabilities;
  jvmtiEventCallbacks callbacks;

  /* Get JVMTI environment */
  jvmti = NULL;
  rc = (*vm)->GetEnv(vm, (void **)&jvmti, JVMTI_VERSION);
  if (rc != JNI_OK) {
    fprintf(stderr, "ERROR: Unable to create jvmtiEnv, GetEnv failed, error=%d\n", rc);
    return -1;
  }
  CHECK_FOR_NULL(jvmti);

  /* Get/Add JVMTI capabilities */
  .....
  /* Create the raw monitor */
  err = (*jvmti)->CreateRawMonitor(jvmti, "agent lock", &(gdata->lock));
  CHECK_JVMTI_ERROR(err);

  /* Set callbacks and enable event notifications */
  ....
  return JNI_VERSION_1_2;
}

JNIEXPORT jint JNICALL Java_ProgramMonitor_getNumberOfLoadedClasses(JNIEnv *env, jclass cls) {
  jclass *classes;
  jint count;

  (*jvmti)->GetLoadedClasses(jvmti, &count, &classes);

  return count;
}

void dump() {
  // Dump information....
  .....
}

JNIEXPORT void JNICALL Java_ProgramMonitor_dumpHeap(JNIEnv *env, jclass cls) {
  dump();
}

JNIEXPORT void JNICALL Java_ProgramMonitor_forceGC(JNIEnv *env, jclass cls) {
  printf("Forcing GC...\n");
  jvmtiError err = (*jvmti)->ForceGarbageCollection(jvmti);
  CHECK_JVMTI_ERROR(err);
  printf("Finished Forcing GC...\n");
}




The points to note from the above are that the JNI_OnLoad method is called, a reference to the JVMTI environment is obtained, and the desired JVMTI capabilities and event callbacks are established. Note the forceGC call.

Now that we have the implementation of the library, we can build the same. The resulting library is called libProgramMonitor.so. So what we have now is a C library that obtains a handle to JVMTI and provides for methods to force garbage collection and provide information on the heap at any given time.
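
For reference, on Linux the compile and link step for such a library might look roughly like the line below; this is an assumption on my part, as the actual flags come from the makefile adapted from the JDK demo:

>gcc -shared -fPIC -I${JDK_HOME}/include -I${JDK_HOME}/include/linux ProgramMonitor.c -o libProgramMonitor.so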

We are now ready to execute our Foo class and witness the output.



>java -Djvmtilib=/home/sacharya/jvmti-examples/libProgramMonitor.so -classpath . Foo
Forcing GC...
Finished Forcing GC...
Number of loaded classes 353
Heap View, Total of 35688 objects found.

Space Count Class Signature
---------- ---------- ----------------------
8 1 LFoo$Bar;
---------- ---------- ----------------------

Number of loaded classes 353
Heap View, Total of 35690 objects found.

Space Count Class Signature
---------- ---------- ----------------------
16 2 LFoo$Bar;
16 1 LFoo;
---------- ---------- ----------------------

Forcing GC...
Finished Forcing GC...
Number of loaded classes 353
Heap View, Total of 35679 objects found.

Space Count Class Signature
---------- ---------- ----------------------
8 1 LFoo$Bar;
---------- ---------- ----------------------



From the above, notice that the instance of Bar that was transiently created was reclaimed. The static reference to Bar however lingered as expected.

Conclusion:
We can easily add more methods to the ProgramMonitor class to provide information such as References, Threads etc. Note that the ProgramMonitor library does not display all the loaded classes; it filters out the ones that begin with "java" or "sun".

Using JVMTI can be very valuable in validating code and ensuring it behaves as expected at the unit test level. I am aware that there is commercial software that does the same :-)...You can't blame me for playing ;-). JVMTI is powerful stuff and I am only feeling the temperature of the water here. I don't want to enter the "C" though ;-)! If Linux is for geeks, then so is C. I have reached the conclusion that Java is the equivalent of the Windows OS for the C programmer.

I am not quite sure whether "ForceGarbageCollection" is indeed a guarantee of garbage collection. I am also curious regarding promotion of objects across different GC spaces and how that affects the code.

Source:
The code shown above was developed on JDK 1.6.X and run on a Linux OS. It is easily made compatible with other platforms by looking at the examples in the standard jdk demo. In addition, the majority of the code is based on the heapViewer demo code. This example is only an "example".

As always, my source can be obtained from HERE

Running the Example:
Ensure you have JDK 1.6 installed and you are on a Linux OS. Export JDK_HOME to your jdk home. Run javac to compile the sources. Run the makefile by typing "make" to build "libProgramMonitor.so". Finally run ">java -Djvmtilib=/home/sacharya/jvmti-examples/libProgramMonitor.so -classpath . Foo", replacing the jvmtilib value with the location of the libProgramMonitor file on your file system. I know I should have made a maven project, and I should have also had the .java file compiled from the make file. Oh well! In addition, why couldn't I have used System.loadLibrary() to load the JNI library file? I couldn't, as it didn't work even though I have LD_LIBRARY_PATH defined correctly, and I am too lazy to figure out why ;-) Also, a better test would have been a busier case where the CPU is really occupied, to view the GC.

Ping me if you cannot run this example. I have tried the same on Suse 11 and Mandriva Spring.

Resources:
JVM Tool Interface (JVMTI): How VM Agents Work
Java Forum inspiration
Garbage Collection Forcing documentation
Heap Analyzer Tool - Worth checking out
Creating and Debugging a Profiling Agent with JVMTI

Tuesday, November 25, 2008

Interviews - A chance at playing god.

There are a few times where one gets a chance to directly influence the life of another individual substantially. If one is a judge, the statement is considered null and void, as it happens all the time. I am talking about a scenario where an interviewer interviews a candidate, and the interviewer's opinion and decision determine whether the candidate is hired or not.

I do interviews sometimes. I must however admit that I do not feel this as a moment of power by any means. I tend to put myself in the interviewing candidate's shoes: here is a person who is either unemployed and desperately trying to find a job, or a person who has let go of a day of vacationing in the Bahamas in search of a better position for them and their family, and I am tasked with providing my extremely valued decision of "Yes or No". As an interviewer, one must do proper justice to the interviewing process, as one owes that to the organization on behalf of which they are conducting the interview, while still respecting the individual being interviewed and providing their best judgement.


That said, let me narrate a factual interview that occurred:

Candidate is seated awaiting the interviewer to come pick him up. The candidate is right outside the office of the interviewer. A lady storms out, another candidate interviewing for the same position, tears in her eyes. Clearly the interview did not go as expected. But tears in her eyes, that's quite disconcerting to say the least for our candidate.


Following suit, emerges the interviewer, stern faced, a no-nonsense type of character and beckons the candidate to enter. Candidate is already a bit disturbed by the plight of the previous candidate. The candidate now follows the interviewer into his room and takes a seat. The questioning process starts. Some standard questions and introduction are exchanged, no pleasantries, just formalities, before the meat of the interview starts.

Interviewer: I hope you realize, your CEO is a moron, he has done absolutely nothing for the company since he started.
Candidate: I must disagree with your assessment. It is my understanding that he has achieved all the goals that he set out to do and has in fact increased the revenue of our company by 20% during his tenure.
Interviewer: I still think he is a moron, anyway, let's go on. So tell me why should I hire you?
Candidate: I have the relevant experience and skill required of this job, I am personable and have the confidence to execute the duties of this position well, I have good contacts, I..
Interviewer (interrupting): Your previous experience is of no value in this job as its a whole new environment. Regarding personality and confidence, my 10 year old son has the same attributes that you mentioned. Maybe I should take him for the position instead?
Candidate: That is your decision to make but I must hold my stance that he lacks the necessary experience that I possess.
Interviewer: I am not convinced, anyway, let's move on. So do you think you look good?
Candidate: I am no Tom Cruise but at the same time I consider myself presentable.
Interviewer: I am of the opinion that you look far better than Tom Cruise.
Candidate: I am flattered by your assessment and am glad that someone feels that way, as my wife certainly does not (candidate trying to add some humor here).
Interviewer: If that is the case, I feel that you married the wrong person and you and your wife argue and fight a lot!
Candidate: This is the only topic we really ever fight about (still maintaining composure)
Interviewer: I am not yet convinced and still maintain that you are far better looking than Tom Cruise.
Candidate (Smiling): Thank you once again. If you would be so kind as to mention the same to the wife, it would eliminate our one reason for arguing.
Interviewer (Looking around the room): OK, let's move on now. Give me 10 uses of a coat hanger within the next minute and a half.
Candidate (Answering with rapid fire): It can be used to hang a coat, draw a triangle, a weapon, remove cobwebs........
Interviewer: OK, now tell me three good reasons to hire you as I am still not convinced.
Candidate: I have a lot of experience, I have excellent contacts, I have the drive and enthusiasm.
Interviewer: This is absolutely no good, everyone says the same thing. You have not even provided one good reason as to why I should hire you.
Candidate (At this point irritated): I have given you the reason why I believe that I am a fit for this organization. If you remain unconvinced, then, you are definitely entitled to your view. I, however have nothing else to add.
....
Some final statements and the interview ends. The candidate is taken outside to a cab by the interviewer. No smiles, no chilling after the interview, no return to earth, no small talk, just a goodbye.

So what happened? Do you think the candidate got an offer? You bet he did! Did he take it? Read on to find out...

Analyzing the interview, why was it so aggressive? This is clearly not a regular interview by any means. The interview was for a very senior position involving sales. The interviewer was apparently simulating a typical sales environment where he acted like a tough customer. The environment was to test how the candidate would perform and sell. It was a test to see how thick skinned the candidate is, how well he handles pressure, how composed he is and, most importantly, how he can defend his position and make a sale.

So, where does that leave the candidate? Does the candidate take the position? The candidate did not. The candidate, although uncomfortable with the line of questioning, did understand the direction of the leading questions. However, the problem with the interviewer was his continued aggressive behavior after the conclusion of the interview. There was absolutely no way the candidate wanted to work with an individual with a personality like the interviewer's. So the candidate turned down the offer and found a far more lucrative and suitable position in a different organization. The point of note here is that the candidate did not fail! The interviewer failed! He failed his organization as an interviewer, as the deciding factor in whether the candidate accepted the job or not was really the behavior of the interviewer. The interviewer lost really good talent, and the same could be equated to a $$$ loss for the organization, as they lost a really good sales person.

I think for a moment as to how I would have reacted to that line of questioning. I must admit that I have a very transparent face and a far lower tolerance level. I would have been red within the first few levels of questioning and would have stormed out of the office after hurling some of the choicest words at the interviewer. Would I have got the job, errr, I doubt it :-).
I don't think I have the skin to be in sales. More importantly, I do not think I can tolerate people who would make others cry during an interview process. I do not believe that any individual, interviewing for a sales position or any other, should have to undergo such a torturous line of questioning. I also believe that as an interviewer or as a candidate, one should steer away from any topics that are personal in nature, such as marriage, family etc. Apart from being just plain territory that should not be charted, they represent food for lawsuits as well.

So, as a person conducting the interview, we have the power play; our opinion will count, regardless of whether it is a sole decision or a collaborative one after discussing with other "gods". We can choose to intimidate, befriend or tread a path in between while interviewing the candidate. As an interviewer, one should control the interview session but not come across as rude. It is a balance. Remember that as an interviewer, one is representing not just oneself but one's organization as well. What the interviewer portrays will be the candidate's impression of the organization. From the perspective of the candidate, the keyword has to be "impress". Impress by personality or skill or both. As an interviewer, one can easily judge the latter but the former is often a grey area. At best, I would rate gauging a candidate as a dark art. Remember that the candidate is assessing the interviewer and organization as well....It's a two way sale!

I must state one thing! I work at Overstock.com and the interviewing process here is so tangential from that mentioned above. Candidates who come to Overstock for interviews are treated with the utmost respect and hospitality. I joined the organization after all :-))))

Saturday, November 22, 2008

Home town of the Boss, jax-rs, jersey, spring, maven

I have previously tried jax-rs implementations from Restlet and JBoss RESTEasy; you can find those write-ups in my earlier posts.


One implementation that I had been postponing was Sun's RI, i.e., jersey. Trying to save 'hopefully' the best for last ;-). The name of the implementation has a part of the Boss's town of birth after all! Born in the USA! I am not born in the USA, but love it as much as my own country and is my home away from home! Moving on...

As before, I tweaked the simple Order Web Service example to use jersey. Some of the features of jersey that stood out to me:
  • Support for a Client API to communicate with Rest Resources.
  • Very Easy Spring integration.
  • Sun's RI, i.e., from the source
  • Support for exceptions
  • Very good support for JSON Media Type
  • Maven
  • Good set of examples
  • Automatic WADL generator
  • IOC
  • Embedded deployment using Grizzly
  • Filters on client and server side
  • Utilities for working with URI
One of the things that has impressed me about jersey is their out of the box JSON support. Being able to support JSON format without having to create a new javax.ws.rs.ext.Provider is rather convenient. By default the JSON convention is JSONJAXBContext.JSON_NOTATION. One can quite easily change the same to use Jettison or Badgerfish convention.

I was easily able to enable the JSON representation for my Product resource by defining the Product data transfer objects with JAXB annotations, adding a @Produces("application/json") in the ProductsResource class and ensuring that I have the jersey-json jar in my build.



ProductDTO.java

@XmlType(name = "product")
@XmlRootElement(name = "product")
public class ProductDTO implements Serializable {
  ....
}

ProductListDTO.java

@XmlRootElement(name = "productList")
@XmlAccessorType(XmlAccessType.FIELD)
@XmlType(name = "products", propOrder = {"productDTOs"})
public class ProductListDTO implements Iterable<ProductDTO> {
  ....
}

ProductsResource.java

@GET @Produces("application/json")
public ProductListDTO getProducts() {
  ...
}






<dependency>
  <groupId>com.sun.jersey</groupId>
  <artifactId>jersey-json</artifactId>
  <version>${jersey-version}</version>
</dependency>




There is an excellent tutorial, Configuring JSON for RESTful Web Services in Jersey 1.0 by Jakub Podlesak that you can read more about.

To support Spring integration, the web application's deployment descriptor has been modified to use the Jersey Spring Servlet. All the Spring managed beans defined by the @Component, @Service, @Resource annotations are automatically wired.



<context-param>
  <param-name>contextConfigLocation</param-name>
  <param-value>classpath:applicationContext.xml</param-value>
</context-param>

<listener>
  <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
</listener>

<servlet>
  <servlet-name>JerseySpring</servlet-name>
  <servlet-class>com.sun.jersey.spi.spring.container.servlet.SpringServlet</servlet-class>
  <load-on-startup>1</load-on-startup>
</servlet>





On the client side, I was pleasantly surprised by the ease with which I could invoke REST calls. As mentioned in my RESTEasy blog, jax-rs has no requirement for a client side specification. However, those implementations that do provide one will be the ones to gain more adoption IMO. The jersey implementation provides a DSL-like API to talk to the services. Very "VERBY"; if there is no word in the dictionary like "VERBY", I stake my claim on the same :-). I modified the client from the RESTEasy implementation of the Order Service to use Jersey Client support as follows:



public class OrderClientImpl implements OrderClient {
  private final WebResource resource;

  /**
   * @param uri Server Uri
   */
  public OrderClientImpl(String uri) {
    ClientConfig cc = new DefaultClientConfig();
    // Include the properties provider
    Client delegate = Client.create(cc);

    // Note that the Resource has been created here
    resource = delegate.resource(uri).path("order");
  }

  public OrderDTO createOrder(OrderDTO orderDTO) throws IOException {
    return resource.accept(MediaType.APPLICATION_XML).type(MediaType.APPLICATION_XML)
      .post(OrderDTO.class, orderDTO);
  }

  public void deleteOrder(Long orderId) throws OrderException {
    resource.path(orderId.toString()).type(MediaType.APPLICATION_XML).delete();
  }

  public OrderDTO getOrder(Long orderId) throws OrderNotFoundException, IOException {
    try {
      return resource.path(orderId.toString())
        .type("application/xml").accept("application/xml").get(OrderDTO.class);
    } catch (UniformInterfaceException e) {
      if (e.getResponse().getStatus() == Status.NOT_FOUND.getStatusCode()) {
        throw new OrderNotFoundException(e.getResponse().getEntity(String.class));
      }
      throw new RuntimeException(e);
    }
  }

  public void updateOrder(OrderDTO orderDTO, Long id) {
    resource.path(id.toString()).type("application/xml").put(orderDTO);
  }
}





As you can see from the above, the use of the Jersey client API is rather straightforward and intuitive. One point to note is that Jersey provides an exception framework for easily handling common exception cases like 404 etc. There are classes that enable this support, com.sun.jersey.api.NotFoundException and com.sun.jersey.api.WebApplicationException, that one can use. As I did not want to tie my data transfer object maven project to jersey in particular, I did not use jersey exceptions but instead stuck with my custom Exception Provider.
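
For completeness, a consumer of the client above might look something like the following sketch; the base URI and the getId() accessor on OrderDTO are assumptions on my part:

public class OrderClientDemo {
  public static void main(String[] args) throws Exception {
    // Base URI is an assumption; point it at wherever the webapp is deployed.
    OrderClient client = new OrderClientImpl("http://localhost:9090/SpringTest-webapp");

    OrderDTO order = new OrderDTO(); // populate line items etc. as required
    OrderDTO created = client.createOrder(order);         // POST
    OrderDTO fetched = client.getOrder(created.getId());  // GET; assumes OrderDTO exposes getId()
    client.deleteOrder(fetched.getId());                  // DELETE
  }
}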

Running the Example:
The Example can be downloaded from HERE.

This example has been developed using maven 2.0.9 and jdk1.6.X. Unzip the project using your favorite zip tool, and from the root level execute a "mvn install". Doing so will execute a complete build and run some integration tests. One interesting thing to try is to start the jetty container from the webapp project using "mvn jetty:run" and then access the jersey generated WADL from http://localhost:9090/SpringTest-webapp/application.wadl

Now that you have the WADL, you should be able to use tools like SOAPUI or poster-extension (a Firefox plugin) to test your RESTful services as well.

It would be interesting to see how wadl2java and the maven plugin provided there in can be used to create a separate client project to talk to the web services.

The jersey web site has pretty good documentation about using jax-rs. It is not thorough but getting there. There are a good set of examples that one can download and try as well. It is my understanding that NetBeans has good tooling support for jersey as well.

So now that I have tried Restlet, different jax-rs implementations and jax-ws what would be my direction if I were to take a decision on what to use for my SOA project? Some food for my next blog :-)

Again, if the example fails to run, ping me...Enjoy!

Monday, October 20, 2008

SOAPUI REST Support

Some time ago, I had blogged lamenting the absence of a nice testing tool for REST. At that time, a gentleman from the SOAP UI team mentioned the upcoming support for REST from Soap UI.

I could not help but try out the beta release of the same. SOAP UI should probably change its name to WsUI (Web Service UI). These guys seem to be doing a pretty neat job. If their SOAP UI is anything to go by, their REST support should turn out to be quite solid.

So I went to the SOAP UI site, downloaded the free version (SOAP UI 2.5 beta1) and installed the same. SOAP UI supports reverse engineering from WADLs quite like it does with WSDLs. An example of using WADL with SOAP UI is documented on their site.

As we do not as 'yet' use WADL in my current organization, I wanted to test out the ease with which one could test REST calls, operations such as GET/POST etc., using SOAP UI.

I used a simple web service that serves two resources: "/products", returning a list of JSON products, and "/order", a resource that allows for POST/GET/PUT and DELETE of Orders.

When the SOAP UI application opens, clicking on "New SOAPUI project" brings forth a dialog wherein one can either choose to specify a WADL or select the option for a REST Service as shown below:







Clicking OK on the above, brings forth a dialog that allows one to create a Service as shown below:





Note the end point supplied points to the Web app. In addition, select the check box to create a Rest Resource before clicking OK to bring forth the Rest Resource Dialog:



After accepting the above dialog, proceed to the Dialog shown below to be able to execute a call to the web service to obtain products.



In the above dialog, although I specified application/json as the media type, the JSON tab does not display the returned JSON content. However, in the RAW tab, one can view the JSON content.




The same process above can be followed in order to create an order resource with different types of HTTP operations supported by the resource.

Pretty neat..makes testing REST Web Services rather easy. In particular, this looks like a boon to testers as well, as they now have an easy way to test out Rest services. One can also set up load tests, intercept request/response using tcpmon, and/or pull SOAP UI into eclipse as a plugin.

I have attached herewith a Maven project that represents a simple web service. There are two resources provided, "/products" and "/order". The former returns a JSON representation of products. On the latter resource, issuing a GET via "/order/1" will return back an Order. You can create an order easily by using POST. The xml for the same is:

<?xml version="1.0" encoding="UTF-8" standalone="yes">
<order>
<lineItem>
<itemId>12123</itemId>
<itemName>XBOX 360</itemName>
<quantity>2</quantity>
</lineItem>
</order>


A sample SOAPUI project for the same can be obtained from HERE. Import the same into SOAP UI. Execute "mvn jetty:run" on the Maven project to start the webapp. Subsequently execute each resource call using SOAP UI, Create Load Tests, Create a WADL, Enable TCP monitoring, Enable JAXB support, WADL.....Rock on! :-)))

Saturday, September 27, 2008

Hibernate Shards, Maven a Simple Example

Sharding:
In my career as a Java Developer, I have encountered cases where data of a particular type is split across multiple database instances having the same structure but segregated by some logical criteria. For example, if we are in a health organization, it is possible that member (you and me, patients) data for one health plan A is in one database while member data for health plan B is in another database.
There could be many reasons for the above. It is possible that there is just too much data for a single database, possible that health plan A just does not want their data mixed with health plan B or maybe the network latency for Health Plan B to store the data in the same database might be high.
Sharding can be thought of as segregating and partitioning the data across multiple databases, driven by functional or non-functional forces.
When a data consumer is talking to an architecture where the data is sharded, one typically experiences one of the following cases:
  1. The application at any time requires interaction with a single shard only. For example, in a health plan application that is supporting Health Plan A and Health Plan B, a query to find members will always be restricted to a particular plan. In such a case a simple routing logic can assist in directing the query to the particular database. I did have a similar challenge in a previous engagement; my favorite framework Spring and its Routing DataSource helped solve the problem. The blog by Mr. Mark Fisher was the direction that I followed.
  2. The application needs to interact with data that is part of both databases. For example, to obtain data about members stored in Health Plan A or Health Plan B that match a particular criteria. This requirement is more complicated as one needs to obtain result sets from both databases.
Hibernate Shards is a project that facilitates working with database architectures that are sharded, by providing a single transparent API to the horizontally partitioned data set. Hibernate Shards was created by a group of Google engineers who have since open sourced their efforts, with the goal of reaching a GA release soon with assistance from the open source community. More documentation regarding the architecture, concepts and design can be read on the Hibernate Shards site.
The Requirements: We are off to the world cup of soccer. Every country playing in the cup maintains a database of its players. We need to develop an application for FIFA that will allow each country to create and maintain players and obtain information about players from other countries as well, querying across these different databases. A database instance is provided for each country, but the country does not have direct access to insert or query data against it; instead it must use the FIFA application for all operations.
The Example: Denoting a Player in the space of the application, we define a POJO called NationalPlayer. The NationalPlayer has some interesting attributes,
  • First and Last Name of the Player
  • Maximum Career Goals scored by the player
  • His individual ranking in the world as a player
  • The country he plays for
The Country that the player plays for indicates the shard database that his data resides on.
@Entity
@Table (name="NATIONAL_PLAYER")
public class NationalPlayer {
  @Id @GeneratedValue(generator="PlayerIdGenerator")
  @GenericGenerator(name="PlayerIdGenerator", strategy="org.hibernate.shards.id.ShardedUUIDGenerator")
  @Column(name="PLAYER_ID")
  private BigInteger id;

  @Column (name="FIRST_NAME")
  private String firstName;

  @Column (name="LAST_NAME")
  private String lastName;
  
  @Column (name="CAREER_GOALS")
  private int careerGoals;
  
  @Column (name="WORLD_RANKING")
  private int worldRanking;

  public Country getCountry() { return country; }
  ....  
  public NationalPlayer withCountry(Country country) {
    this.country = country;
    return this;
  }
  ......
  @Column(name= "COUNTRY", columnDefinition="integer", nullable = true)
  @Type(
      type = "com.welflex.hibernate.GenericEnumUserType",
      parameters = {

              @Parameter(
                  name  = "enumClass",                      
                  value = "com.welflex.model.Country"),
              @Parameter(
                  name  = "identifierMethod",
                  value = "toInt"),
              @Parameter(

                  name  = "valueOfMethod",
                  value = "fromInt")
              }

  ) 
  private Country country;
  ...
  ....
}
We define a simple Java 5 enum type denoting the different countries that are participating in the World Cup. Sadly, we have only 3: India, USA and Italy for the 2020 world cup.
public enum Country {
  INDIA (0), USA(1), ITALY(2);
  
  private int code;
  
  Country(int code) {
    this.code = code;
  }

  public int toInt() {
    return code;
  }
  ...
}
The application uses three sharded databases for the three countries participating in the cup, and each database is defined using a hibernate configuration file: hibernate0.cfg.xml for India, hibernate1.cfg.xml for USA and hibernate2.cfg.xml for Italy. To keep the example easy, we have decided that the Country codes defined in the enum map to the hibernate configs. For example, India is Country 0 as denoted by the enum and it maps to shard database 0. One could maintain a map for the same if required.
For example, the hibernate configuration for the Indian database is as follows:
<hibernate-configuration>
 <session-factory name="HibernateSessionFactory0">
  <property name="dialect">org.hibernate.dialect.HSQLDialect</property>

  <property name="connection.driver_class">org.hsqldb.jdbcDriver</property>
  <property name="connection.url">jdbc:hsqldb:mem:shard0</property>
  <property name="connection.username">sa</property>

  <property name="connection.password"></property>
  <property name="hibernate.hbm2ddl.auto">update</property>
  <property name="hibernate.connection.shard_id">0</property>

  <property name="hibernate.shard.enable_cross_shard_relationship_checks"> true </property>
  <property name="hibernate.show_sql">true</property>
  <property name="hibernate.format_sql">true</property>

  <property name="hibernate.jdbc.batch_size">20</property>
 </session-factory>
</hibernate-configuration>
In order to route data to the appropriate database for storage, we define a custom shard selection strategy that uses the country code of the NationalPlayer object to route persistence to a particular database as shown below:
public class ShardSelectionStrategy extends RoundRobinShardSelectionStrategy {
  public ShardSelectionStrategy(RoundRobinShardLoadBalancer loadBalancer) {
    super(loadBalancer);
  }

  @Override
  public ShardId selectShardIdForNewObject(Object obj) {
    if (obj instanceof NationalPlayer) {
      ShardId id = new ShardId(((NationalPlayer) obj).getCountry().toInt());
   
      return id;
    }
    return super.selectShardIdForNewObject(obj);
  }
}
To easily work with Hibernate, we have a HibernateUtil class that factories sharded sessions and also sessions to individual databases. How shards are selected/delegated to can be customized by providing an implementation of the interface ShardStrategyFactory. In the example, we have only chosen to customize the selection strategy, as sketched below.
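For reference, such a factory might be wired up roughly as follows; the class names come from the Hibernate Shards API, but the exact wiring (and the ShardedConfiguration it eventually feeds) has varied between Shards releases, so treat this as a sketch rather than the project's actual HibernateUtil:
import java.util.List;

import org.hibernate.shards.ShardId;
import org.hibernate.shards.loadbalance.RoundRobinShardLoadBalancer;
import org.hibernate.shards.strategy.ShardStrategy;
import org.hibernate.shards.strategy.ShardStrategyFactory;
import org.hibernate.shards.strategy.ShardStrategyImpl;
import org.hibernate.shards.strategy.access.SequentialShardAccessStrategy;
import org.hibernate.shards.strategy.resolution.AllShardsShardResolutionStrategy;

public class CountryShardStrategyFactory implements ShardStrategyFactory {

  // Combine our country based ShardSelectionStrategy (the class shown above)
  // with the stock resolution and access strategies shipped with Hibernate Shards.
  public ShardStrategy newShardStrategy(List<ShardId> shardIds) {
    RoundRobinShardLoadBalancer loadBalancer = new RoundRobinShardLoadBalancer(shardIds);
    return new ShardStrategyImpl(
        new ShardSelectionStrategy(loadBalancer),        // routes new objects by Country
        new AllShardsShardResolutionStrategy(shardIds),  // look on every shard when resolving ids
        new SequentialShardAccessStrategy());            // run cross-shard queries one shard at a time
  }
}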
To test our example, we have some unit tests. The tests will ensure the following are working:
  • When Players are persisted, they are stored only in the appropriate database instance. In order to ensure the same, a direct connection to the target sharded database is obtained to ensure the existence of the persisted player. In addition, direct connections to the other databases in the shards are obtained to ensure that the player in question has not been inserted there.
  • Queries executed against the shard will obtain data from all the sharded databases accurately. The tests will ensure that all databases in the shard are being accessed.
The Players in the unit test are as follows:
NationalPlayer indiaPlayer = new NationalPlayer().withCountry(Country.INDIA)
   .withCareerGoals(100).withFirstName("Sanjay").withLastName("Acharya").withWorldRanking(1);

NationalPlayer usaPlayer = new NationalPlayer().withCountry(Country.USA)
   .withCareerGoals(80).withFirstName("Blake").withLastName("Acharya").withWorldRanking(10);

NationalPlayer italyPlayer = new NationalPlayer().withCountry(Country.ITALY)
      .withCareerGoals(20).withFirstName("Valentino").withLastName("Acharya").withWorldRanking(32);
The Persistence test is the following:
@Test
public void testShardingPersistence() {
    BigInteger indiaPlayerId = null;
    BigInteger usaPlayerId = null;
    BigInteger italyPlayerId = null;
    
    // Save all three players
    savePlayer(indiaPlayer);
    savePlayer(usaPlayer);
    savePlayer(italyPlayer);
    
    indiaPlayerId = indiaPlayer.getId();
 System.out.println("Indian Player Id:" + indiaPlayerId);
    
    usaPlayerId = usaPlayer.getId();
    System.out.println("Usa Player Id:" + usaPlayerId);
    
    italyPlayerId = italyPlayer.getId();
    System.out.println("Italy Player Id:" + italyPlayerId);
    
    assertNotNull("Indian Player must have been persisted", getShardPlayer(indiaPlayerId));
    assertNotNull("Usa Player must have been persisted", getShardPlayer(usaPlayerId));
    assertNotNull("Italy Player must have been persisted", getShardPlayer(italyPlayerId));

    // Ensure that the appropriate shards contain the players    
    assertExistsOnlyOnShard("Indian Player should have existed on only shard 0", 0, indiaPlayerId);    
    assertExistsOnlyOnShard("Usa Player should have existed only on shard 1", 1, usaPlayerId);    
    assertExistsOnlyOnShard("Italian Player should have existed only on shard 2", 2, italyPlayerId);    
}
The Simple Criteria based tests are the following:
 @Test
  public void testSimpleCriteria() throws Exception {
    Session session = HibernateUtil.getSession();
    Transaction tx = session.beginTransaction();

    try {
      Criteria c = session.createCriteria(NationalPlayer.class)
        .add(Restrictions.eq("country", Country.INDIA));

      List<NationalPlayer> players = c.list();
      assertTrue("Should only return the sole Indian Player", players.size() == 1);
      assertContainsPlayers(players, indiaPlayer);
      
      c = session.createCriteria(NationalPlayer.class).add(Restrictions.gt("careerGoals", 50));
      players = c.list();

      assertEquals("Should return the usa and india players", 2, players.size());
      assertContainsPlayers(players, indiaPlayer, usaPlayer);

      c = session.createCriteria(NationalPlayer.class)
        .add(Restrictions.between("worldRanking", 5, 15));

      players = c.list();
      assertEquals("Should only have the usa player", 1, players.size());

      assertContainsPlayers(players, usaPlayer);
      
      c = session.createCriteria(NationalPlayer.class)
        .add(Restrictions.eq("lastName", "Acharya"));
  
      players = c.list();

      assertEquals("All Players should be found as they have same last name", 3, players.size());
      assertContainsPlayers(players, indiaPlayer, usaPlayer, italyPlayer);

      tx.commit();
    } catch (Exception e) {
      tx.rollback();
      throw e;
    } finally {
      if (session != null) {
        session.close();
      }
    }
}
Running The Example: The example is a Maven 2 project. JDK 1.6.X was used to develop/test. The database of choice for the example is HSQL; why would anyone select Oracle or any other database :-). One can obtain the project from HERE. Simply run "mvn test" to see the tests being run and/or import the code into Eclipse using Q4E or your favorite maven eclipse plugin. As a note, the examples are simply examples.
The Parting: Some observations and limitations I noted while working with Hibernate Shards:
  • A requirement for using Hibernate Shards is Java 1.5 or higher. I am not quite sure why this requirement exists, as one does not necessarily need to use annotations or java 5 features for hibernate configuration. A todo for me.
  • Session Factory configuration via JPA is apparently not supported as yet.
  • Different ID generation strategies can be used.
  • One interesting problem in sharding is sorting of the results. Hibernate Shards works around the same by insisting that all objects returned by a Criteria query with an Order By clause implement the Comparable interface. Sorting will only occur within Hibernate Shards after obtaining the result set from each member of the shard.
  • From the documentation, it appears that "Distinct" clauses are not supported as yet.
  • HQL is not well supported as yet. For example, I wanted to execute a "delete from NationalPlayer" across the shard, i.e., clear the table on every database. I could not execute the same, with an unsupported exception being the result. The documentation recommends staying away from HQL if possible.
  • Cross-shard relationships are not yet supported. What this means is that if object A is on shard 1 and object B is on shard 2, one cannot create an association between them. I have not encountered a need for this yet.
Links,
Hibernate Shards Documentation
Nice Read on Sharding
Now only if I can get my Jersey example working...

Wednesday, September 24, 2008

JAXWS Example with Maven and Spring

Introduction:
For the past few months, I have been quite far away from the SOAP world, with my primary focus being on REST, looking at jax-rs implementations, Restlet etc. The other day, I attended the Utah Java User's Group (UJUG) meeting. The UJUG is managed superbly by Mr. Chris Maki and I quite enjoyed myself there with the presentations. The presentations centered around how different companies implemented web services and their stacks.

One company in particular got me interested in their web service stack. The LDS church uses JAXWS with JDK 1.6. The presenter, although he did not go into details, did mention how easy it is to create and consume SOAP web services with JAXWS. One of the things he mentioned was that they did not need to step outside the JDK in order to be able to create SOAP based WS. They did not feel the need for external implementations like CXF etc.

In the projects that I have worked with SOAP, the words, simple or easy have never come to mind :-) So what was this gentleman talking about? I had to find out. Would have loved to chat with him more but could not, so here I go finding out the hard way :-).

JAXWS:
So what is JAXWS? JAXWS (Java API for XML Web Services) can be thought of as a Java programming API to create Web Services. JAXWS uses annotations in the creation of end points and service clients, and JAXWS 2.0 replaced, or rather encompassed, the JAX-RPC API; it is part of Java EE 5 and is bundled with Java SE 6. For more details on the same, look at this developer works article. JAXWS uses JAXB 2.0 for data binding.

Simple JAXWS Example:
Like my JAXRS Maven/Spring examples, the JAXWS web service is implemented using a multi module maven project with a structure as shown below:

jaxws-example
|_ client - Contains generated code from WSDL
|_ service - Contains model and business logic
|_ webapp - Contains Web service wrapper, servlet defs etc

For the example, I am using the JAXWS Maven plugin. The plugin has two goals that the example uses: wsgen, which reads a service end point class and generates service artifacts, and wsimport, which is used to generate the consumer code.

The webapp maven module contains JAXB annotated DTO's like an OrderDto and LineItemDto. The Web service is exposed via a simple wrapper class:

 @WebService
 public class OrderWebService {
   private static final Logger log = Logger.getLogger(OrderWebService.class);
   @Autowired private MapperIF beanMapper;
   @Autowired private OrderService serviceDelegate;
   ...

   public OrderDto getOrder(Long orderId) throws OrderWebException {
     Order order = null;
   
     try {
       order = serviceDelegate.getOrder(orderId);
     }
     catch (OrderNotFoundException nfe) {
       throw new OrderWebException(nfe.getMessage());
     }
    
     OrderDto orderDTO = (OrderDto) beanMapper.map(order, OrderDto.class);
     return orderDTO;
   }
   ...
}

The @WebService annotation indicates that the above class is a Web Service end point. The service delegate does the bulk of the service work, with the OrderWebService class acting only as a front. As seen above, we are using Spring and auto-wiring for the delegate and bean mapper. One other thing to note is that we have a class called OrderWebException; this is a class annotated with @WebFault, indicating that this exception is raised when there is a web fault.
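
A minimal sketch of what such an exception might look like; the actual class in the project may well carry more information (a fault bean, error codes etc.), and the name of the fault element here is an assumption:

import javax.xml.ws.WebFault;

// Raised by OrderWebService when an order cannot be served; wsgen will
// generate the corresponding fault artifacts for the WSDL.
@WebFault(name = "OrderFault")
public class OrderWebException extends Exception {
  public OrderWebException(String message) {
    super(message);
  }
}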

I have a separate web service layer and an actual service layer, as I feel there are some advantages to such an approach. I have discussed the same in my earlier blogs.

During the maven build process, the jaxws maven plugin will come into play and generate service artifacts. The maven plugin is configured as follows:

<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>jaxws-maven-plugin</artifactId>
  <executions>
    <execution>
      <goals>
        <goal>wsgen</goal>
      </goals>
      <configuration>
        <!-- The name of your generated source package -->
        <packageName>com.welflex.service</packageName>
        <sei>com.welflex.soap.ws.OrderWebService</sei>
        <keep>true</keep>
        <genWsdl>true</genWsdl>
      </configuration>
    </execution>
  </executions>
</plugin>


The generator will generate the WSDL and web service artifacts. The schema itself is created at ${base.dir}/target/jaxws/wsgen/wsdl.

One point that is worth noting is that, in order to launch the web service for testing, there is no need for a separate Servlet container, web.xml etc. JDK 1.6.X provides a simple container that will allow you to test the service. One can bind the service using:

@WebService
public class OrderWebService {
  .....
  public static void main(String args[]) {
    javax.xml.ws.Endpoint.publish("http://localhost:8080/ws", new OrderWebService());
  }
}


Voila, then you connect to the service using SOAP UI or your favorite SOAP tool and test the service.

Now the client code. Unlike my rest examples, where one wrote the client code explicitly, in this example the client is entirely generated from the service WSDL. There is no requirement to use auto generation, but it certainly seems easy.

The wsimport goal can create a Web Service client when provided a WSDL. The client project is dependent on the webapp project. Once the webapp project is built, a WSDL is created in the above mentioned target directory. The plugin on the client is configured to point to this WSDL in order to generate client side artifacts as shown below:

<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>jaxws-maven-plugin</artifactId>
  <executions>
    <execution>
      <goals>
        <goal>wsimport</goal>
      </goals>
      <configuration>
        <wsdlUrls>
          <wsdlUrl>../../jaxws-example/webapp/target/jaxws/wsgen/wsdl/OrderWebServiceService.wsdl</wsdlUrl>
        </wsdlUrls>
      </configuration>
    </execution>
  </executions>
</plugin>

Now that we have the client, the integration test project can run; it starts the web service and issues requests from the client to the service.

The integration test creates a client via the following call:

URL wsdlLocation = new URL(INTEGRATION_TEST_URI + "?wsdl");
ORDER_CLIENT = new OrderWebServiceService(wsdlLocation,
      new QName("http://ws.soap.welflex.com/", "OrderWebServiceService"))
         .getOrderWebServicePort();

The WSDL URL inside the generated client points, by default, to the WSDL on our file system. The above overrides that URL to point to an actual server instance.
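
With the port in hand, invoking the service is just plain method calls. The snippet below is only a sketch: the getOrderId() accessor and the generated exception name (wsimport typically suffixes a @WebFault exception with _Exception, but the exact name depends on the WSDL) are assumptions.

OrderDto order = ORDER_CLIENT.getOrder(1L);
System.out.println("Fetched order: " + order.getOrderId());

try {
  ORDER_CLIENT.getOrder(-1L); // an order that does not exist
} catch (OrderWebException_Exception e) {
  // wsimport wraps the @WebFault exception in a generated *_Exception type
  System.out.println("Expected fault: " + e.getMessage());
}
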
Running the Example:
You need JDK 1.6.X and maven 2.0.8 installed. From the root of the project directory, issue a "mvn install" and you should see the entire project being built with the Integration Tests running. The project itself can be downloaded from HERE.

Thoughts:
Some of my friends will echo my thoughts with the pains they have had with web service creation using certain vendor tools. The promise is always, "Watch while I create a Web Service in just two minutes!". JAXWS also tends to address the performance of SOAP WS. Take a look at the article on implementing high performance web services with jaxws.

With JAXWS, I do not have to extend any particular class in order to expose a POJO as a Web Service. The ability to instantly test my web service without the need for an external Servlet container starting up, +1. The ease of generating the client code, another +1. Maven plugin +1. Spring integration +1. Easy deployment without a heavy weight container +1. The plugin however did not seem to provide for easy mapping of name spaces to custom java packages with wsimport.

JAXWS also provides for easier than before implementation of WS-* features if you need them.

I am especially interested in the performance characteristics of JAXWS using JAXB 2.0. I have always believed that one of the major performance penalties paid with SOAP Web services has been on the marshalling/unmarshalling front. I need to do some performance benchmarks I guess.
Enjoy!