Saturday, August 30, 2008

J2EE EAR File Structure

A diagram from an IBM WebSphere redbook. I just keep it here to remind my rusty, high-performance brain.


Thursday, August 28, 2008

JNDI Application Client in WAS 6.1 - Part 2

Continuing from my previous post, this post addresses some of the technical intricacies you might face when trying to authenticate yourself outside a Java EE container while performing naming operations.

Previously, I simply assigned the necessary rights to EVERYONE in the WebSphere Application Server V6.1 Administrative Console so that anyone (including unauthenticated users) could perform naming operations.

However, this setting is not appropriate in a production environment, because operations such as removing bindings and creating new bindings are privileged operations that require careful consideration.

Assume that you have created a new WAS user named "NamingUser1" and assigned it the relevant rights (i.e. CosNaming Delete, etc.).

Now the trick is to pass these credentials to WAS from the Java program.

The first mistake you might make is to assume that the following code will work:

Hashtable env = new Hashtable();
env.put(Context.PROVIDER_URL, "corbaloc:iiop:localhost:2810/NameService");
env.put(Context.SECURITY_PRINCIPAL, "NamingUser1");
env.put(Context.SECURITY_CREDENTIALS, "password1");

No, this will not work. You will be shot down by the exception below:

javax.naming.NoPermissionException: NO_PERMISSION exception caught [Root exception is org.omg.CORBA.NO_PERMISSION:
>> SERVER (id=11c328fe, host=eddy) TRACE START:
>> org.omg.CORBA.NO_PERMISSION: Caught WSSecurityContextException in WSSecurityContext.acceptSecContext(), reason: Major Code[0] Minor Code[0] Message[ null] vmcid: 0x49424000 minor code: 300 completed: No
>> (server-side stack frames omitted)


You need to use JAAS to perform the authentication.

To make the case clearer, let us focus on the following source code:

package lab.namespace;

import java.security.PrivilegedAction;
import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.security.auth.Subject;
import javax.security.auth.login.LoginContext;
import com.ibm.websphere.security.auth.WSSubject;
import com.ibm.websphere.security.auth.callback.WSCallbackHandlerImpl;

public class Connect {
    public static void main(String[] args) throws Exception {
        Hashtable env = new Hashtable();
        env.put(Context.PROVIDER_URL, "corbaloc:iiop:localhost:2810/NameService");
        final Context initialContext = new InitialContext(env);

        // Authenticate through JAAS using the WSLogin configuration
        LoginContext loginContext =
            new LoginContext("WSLogin", new WSCallbackHandlerImpl("NamingUser1", "password1"));
        loginContext.login();
        Subject s = loginContext.getSubject();

        // Run the naming operation under the authenticated subject
        WSSubject.doAs(s, new PrivilegedAction() {
            public Object run() {
                try {
                    initialContext.bind("hello", "1234");
                } catch (Exception e) {
                    e.printStackTrace();
                }
                return null;
            }
        });
    }
}

You also need to add the following JARs to the class path:


Then you cross your fingers and execute the program again. And yet it still fails.

Exception in thread "P=172625:O=0:CT" java.lang.SecurityException: Unable to locate a login configuration
	at java.lang.Class.newInstanceImpl(Native Method)
	at java.lang.Class.newInstance
	at lab.namespace.Connect.main
Caused by: Unable to locate a login configuration
	... 10 more

Now you need to set up the JAAS-specific environment.

Copy the following files to your workspace:


Modify the sas.client.props

Note: Here I assume that the bootstrap port is 2809.

Modify the ssl.client.props


Note: Here I just reused the keystore from the server. This might not be appropriate for a production environment.

You will also need to modify the source code to include the following line:

System.setSecurityManager(new RMISecurityManager());

Create a new file named "security.policy" and specify the following in it.

grant {
  permission java.security.AllPermission;
};

Note: Granting AllPermission is just for demonstration purposes. You should tune the security policy instead.

Lastly, you must add a few JVM arguments for execution, pointing to the files prepared above (the usual WAS 6.1 client properties):

-Djava.security.auth.login.config=file:${YOUR_PATH}\wsjaas.conf
-Dcom.ibm.CORBA.ConfigURL=file:${YOUR_PATH}\sas.client.props
-Djava.security.policy=${YOUR_PATH}\security.policy
-Dcom.ibm.SSL.ConfigURL=file:${YOUR_PATH}\ssl.client.props

Potential Mistake #2: JVM argument ""

The value specified for this argument must start with "file:" (a file URL). Failing to do this will make your head spin.

Finally, the program executes successfully and the new String object is bound into the name space. You can use the dumpNameSpace utility to verify this.

Good luck.


Wednesday, August 27, 2008

JNDI Application Client in WAS 6.1

If you deploy your application into the web container or EJB container of a J2EE/Java EE application server and perform naming operations, the container has already set up all the environment configuration necessary for you to obtain the initial context. The initial context is basically the starting point in the name space that you want to manipulate. In WebSphere, an initial context can be treated as a connection to the name server, where the connection is defined by the bootstrap host, bootstrap port and protocol as part of the provider URL.

In the event that you need to develop an application client that runs outside the containers, you will need to configure this connection yourself, or have it configured as part of the application assembly process with tool assistance. AST and Rational Application Developer can do all this dirty work for you.

However, when you don't have the luxury of using these tools, you really need to know what to do.

In this example, I'm using IBM Websphere Application Server V6.1 with FP0.

Sample code:

package lab.namespace;

import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.InitialContext;

public class Connect {

    public static void main(String[] args) throws Exception {
        Hashtable env = new Hashtable();
        env.put(Context.PROVIDER_URL, "corbaloc:iiop:localhost:9809");

        Context initialContext = new InitialContext(env);
        Context myCtx = (Context) initialContext.lookup("cell/persistent");
        myCtx.bind("hello", "123");
    }
}


Without additional configuration, when you execute this program you will hit the first error:

Exception in thread "main" javax.naming.NoInitialContextException: Cannot instantiate class: [Root exception is java.lang.ClassNotFoundException:]
	at javax.naming.spi.NamingManager.getInitialContext
	at javax.naming.InitialContext.getDefaultInitCtx
	at javax.naming.InitialContext.init
	at javax.naming.InitialContext.<init>
	at lab.namespace.Connect.main
Caused by: java.lang.ClassNotFoundException:
	at java.lang.Class.forName
	at com.sun.naming.internal.VersionHelper12.loadClass
	at javax.naming.spi.NamingManager.getInitialContext
	... 4 more

You can solve this by adding the appropriate JAR to your class path. Locate ws_runtimes.jar at


You can try executing again, and this time you will probably hit the second error:

Exception in thread "main" java.lang.NoClassDefFoundError: com/ibm/CORBA/iiop/ObjectURL
	at java.lang.Class.forName0(Native Method)
	at java.lang.Class.forName(Unknown Source)
	at javax.naming.spi.NamingManager.getInitialContext(Unknown Source)
	at javax.naming.InitialContext.getDefaultInitCtx(Unknown Source)
	at javax.naming.InitialContext.init(Unknown Source)
	at javax.naming.InitialContext.<init>(Unknown Source)
	at lab.namespace.Connect.main

The stack trace points out that the program needs more JARs.

Here you have two options to solve the problem.

1. Change your JRE from SUN to IBM

To do this in Eclipse, go to Window -> Preferences -> Java -> Installed JREs

And specify the location of IBM JRE


And set this JRE as default JRE.

2. Add in only the specific JARs in your class path

Locate the following JARs:


Chances are high that you will still encounter another error after adding the above JARs.

WARNING: jndiNamingException
Exception in thread "P=762318:O=0:CT" javax.naming.NoPermissionException: NO_PERMISSION exception caught [Root exception is org.omg.CORBA.NO_PERMISSION:
>> SERVER (id=144d42ac, host=eddy) TRACE START:
>> org.omg.CORBA.NO_PERMISSION: Not authorized to perform bind_java_object operation. vmcid: 0x0 minor code: 0 completed: No
>> (server-side stack frames omitted)
>> SERVER (id=144d42ac, host=eddy) TRACE END.
vmcid: 0x0 minor code: 0 completed: No]
	at lab.namespace.Connect.main
Caused by: org.omg.CORBA.NO_PERMISSION: Not authorized to perform bind_java_object operation. vmcid: 0x0 minor code: 0 completed: No
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance
	at java.lang.reflect.Constructor.newInstance
	at org.omg.CORBA.portable.ObjectImpl._invoke
	... 2 more

The exception occurs because WAS V6.1 restricts use of the CORBA naming service to those users/groups who have been assigned the following rights, depending on the action:

Cos Naming Read, Cos Naming Write, Cos Naming Create, Cos Naming Delete

You will need to access the Administrative Console (Integrated Solutions Console) to grant the rights.

1. Open Administrative Console
2. Go to Environment -> Naming -> CORBA Naming Service Groups
3. Click on the EVERYONE group (this is just for simplicity; you can use another group)
4. Select the right(s)
5. Save and restart the server.
6. Modify the original program

package lab.namespace;

import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.InitialContext;

public class Connect {

    public static void main(String[] args) throws Exception {
        Hashtable env = new Hashtable();
        env.put(Context.PROVIDER_URL, "corbaloc:iiop:localhost:9809");

        Context initialContext = new InitialContext(env);
        Context myCtx = (Context) initialContext.lookup("cell/persistent");
        myCtx.bind("hello", "1234");
    }
}



You're done. Good luck.


Geek! SQL or Java?

The typical hard-core (and perhaps inexperienced) OO developer despises anyone who puts business logic outside their sphere of influence, say in the database server in the form of SQL stored procedures or UDFs. To a certain extent I, as an overlord of the OO realm (:p), agree with this obstinate design stance, but sometimes over-engineering an application design brings only undesirable consequences.

Design is all about trade-offs, trade-offs and trade-offs. There are times when you should capitalize on platform-specific features to make your life simpler while still meeting the required functionality. Remember the MERGE statement in my previous blog? LOL.

Putting logic in the database layer has both advantages and disadvantages, especially in this age of messy system integrations and consolidations, where more than one application tends to access the same relational data sources.

You should have noticed by now, if your corporate IT environment involves lots of enterprise applications, that the exact same logic is replicated across horizontally aligned systems. Changes are foreseeably challenging.

The good news is that you can use your existing Java knowledge and put it up front in an IBM DB2 environment. Support for Java stored procedures and UDFs has been there for quite some time, but their popularity is still not exceptionally high, maybe because data-layer programmers prefer SQL as their prime language.

Basically what you need to do are (For Windows):

1. Put your logic in a Java class's static methods.

2. Copy the compiled bytecode into the specified DB2 directory, depending on whether the routine is FENCED or UNFENCED (for UDFs), or call a system stored procedure to install a packaged JAR that contains the bytecode.

3. Issue CREATE FUNCTION/PROCEDURE with the proper options referencing the external Java methods.
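As a rough sketch of the idea (the class, method and function names here are hypothetical, not from any real system): the business logic lives in a public static method, and a CREATE FUNCTION statement then maps a SQL function onto it.

```java
// Hypothetical routine class for step 1; in DB2 the compiled bytecode
// would be placed under sqllib/function (or installed via a system SP)
// before the function is registered.
public class TaxRoutines {

    // With PARAMETER STYLE JAVA, DB2 maps SQL DOUBLE to Java double
    public static double salesTax(double amount, double rate) {
        return amount * rate;
    }

    // The matching registration (step 3) would look roughly like:
    //   CREATE FUNCTION SALES_TAX(DOUBLE, DOUBLE) RETURNS DOUBLE
    //     LANGUAGE JAVA PARAMETER STYLE JAVA FENCED
    //     EXTERNAL NAME 'TaxRoutines.salesTax';

    public static void main(String[] args) {
        System.out.println(salesTax(200.0, 0.25)); // 50.0
    }
}
```

After that, `SELECT SALES_TAX(PRICE, 0.25) FROM ...` would invoke the Java method for each row.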

That's all. Simple, right? Of course not; there are other critical considerations when using Java SPs/UDFs.

Check out the REDBOOK here

Good luck, Folks.

DB2 Changing Statement Terminator Symbol

When you pass the -t option to the DB2 CLP, the semicolon (;) is turned on as the statement terminator, enabling you to enter a statement spanning multiple lines before submitting it. This is acceptable behavior when you need to submit a single statement that spans multiple lines in the console.

What if you need to submit a large and complex stored procedure creation script to the CLP? You can save the code in a file and then use db2 -t -f myProc.sql.

Then you hit errors that don't make sense at all.

Most probably the parser is confused by your statement terminator versus your substatement terminator. Substatements, such as those embedded in your stored procedure body, are forced to use ; as their termination character. In this case, when the parser encounters the first embedded semicolon, it thinks the whole statement is ready for parsing. Ta-da, it screws up.

So a better command would be "db2 -td# -f myProc.sql", assuming you are using the # symbol as your statement terminator.

Yet you might face another issue: encountering multiple different statement terminators in the same CLP session.

So you decide to

db2 -td#
select * from syscat.tables#

db2 -td$
select * from syscat.columns$

This example is trivial, but you get my point.

So, is there a better solution? Oh yeah, you can use one of the DB2 CLP control statements, placed inside the script itself, in the form of:

--#SET TERMINATOR <char>

For example:

db2 -t -f myProc.sql

Similar approach can be adopted in Java DB2 programming by submitting it as part of the query you sent to DB2.

Generalization and Specialization

At some stage in life, you will suddenly have a desire to make a turn-around in your career: whether to continue specializing in something or to generalize and handle more kinds of tasks. Face it: people who have specialized in one area for ages will have difficulty adapting to the idea of multi-area, multi-process multi-tasking. The same happens to the generalized worker: they might have a phobia of entering a career dead-end or of their learning curve ceasing. Anyway, ignore what I just said; these are just crappy murmurings.

Here's the meat. There are reasons why you need certified and well-trained personnel to deploy your applications into production environments. One of them is that they always have some well-kept secrets that set them apart from the typical person who tries to be a hero (or is forced to be one). For the good of your enterprise, pay whatever is necessary to get the job done properly. Being a cheapskate is not the way to survive.

Like thousands of others out there, I, as a generic person, sometimes need to do stuff I'm not good at (or at least not at the current moment). Last week, I set up a WebSphere Application Server on a machine that is a Windows domain member. Naively, I just did whatever I do in the company test environment, thinking it would turn on and run flawlessly. Well, most of the features did.

I had a problem obtaining the list of Windows groups and users from the LocalOS registry when trying to map security roles in the deployed applications. WAS smartly returned me a "*null" message on the screen and some "User not found", "Not authorized" or "Password something" entries in the logs.

Scratching my nearly bald head and suspecting something to do with the Windows domain, I looked up the WAS 6 Information Center and searched for the LocalOS registry. The fact is that WAS has different setting requirements for a standalone machine, a domain member and a domain controller if you are using LocalOS.

A quick fix is to add the "" custom property with the value "local" to the LocalOS custom property sheet. This explicitly stops WAS from querying the domain registry for the list of groups and users (that's my requirement; yours might be different). If you want it to get the list from both the domain and the local registry, then read the documentation; there is a list of things to set up for the user who starts the WAS process.

This is just one issue that I have encountered so far; I'll just cross my fingers and hope there are no others.

The risks of not being specialized.

Alphablox Cube Security

I get a real headache when customers want to impose security constraints on an Alphablox solution. Here is a simple security problem that I needed to solve.

A local bank in Malaysia has 14 main branches throughout the country, one main branch per state. There are at least two groups of users with different levels of data access. One group is country-wide users, who can view the data for all the states. The other group is the state managers, who can only see their own slice of the cake.

My first attempt was to define a user property called BelongingState and assign it the state code that the user belongs to, or "ALL" for country-wide users. Then I used Java code to dynamically construct the MDX for the DataBlox. I thought this would solve my problem, but some user actions cause an automatic rewrite of the MDX (this seems to be the default behavior). For example, the Show Siblings action displays all members on the same level, which totally defeats my aim of restricting the view by state. Another is the Drill Up action. I also found that the Member Filter dialog and the Drill Down actions (there are five types of Drill Down available) can expose the data.

Lacking the time to dig deep into the Alphablox object model, I reverted to a quick resolution: use the removeAction attribute to disable Drill Up, Member Filter and Show Siblings for state-level users. At the same time, I wrote a filter on the Drilldown event to check the drill-down option and prevent drilling down into inappropriate data areas. This cuts off a lot of interactivity from the user, but for the sake of security, some trade-offs gotta be made.

Having some faith in the developers who wrote the Alphablox framework, I believe there should be some method to disable the MDX rewrite triggered by user activity, thus preventing the underlying data set from changing. One thing to note is that if you are using MSAS or Essbase as the cube source, you can rely on the native security control features of these cube engines; in particular, you can use the MemberSecurity tag for this purpose. For Cube View based cubes, it might be possible to control the view at the database level.

Back to square one: my point in bringing this topic up is that it is important to plan security requirements that match the out-of-the-box security features provided by Alphablox. For my simple security scenario, I personally think it is more workable to create 14 identical cubes, one per state. By controlling which cube users consume in their reports, you can be sure the data view doesn't violate the security expectations, while retaining powerful interactivity actions such as Member Filter and Drill Up. Things can get pretty ugly and complicated when the security requirements span more than one dimension; in that case, creating a separate cube per data view might not be practical.

Lastly, localization is another aspect of a global business intelligence application that can be as tricky as security to implement in an Alphablox solution. I really have to praise Microsoft Analysis Services (MSAS) for incorporating security and localization so well in its cubes. Maybe this is what differentiates the market leader from the players.

My Preciousssssss (Will it be Ruby?)

Ok, here's the question.

Which one of the following is a programmer's best friend? (See hints)

a. Ruby
b. Java


Java's logo

Ruby's logo


Well, it sort of depends on whether you, as a programmer, like to drink coffee, especially the Java type, or would rather earn more money (by selling rubies?) and eventually afford coffee more exotic than Java. For me, I think there's no harm in doing both at the same time, and why not. :p

Enable Commenting (Comments)

An excerpt from Alphablox 8.4 documentation:

CommentsBlox allows you to provide cell commenting (also known as cell annotations) functionality to your application. In addition, you can use CommentsBlox for general commenting that are not tied to any other Blox. For example, you can allow users to add comments to a site, an application, a report, or a Web page.

Comments are stored in a JDBC accessible relational database. Supported databases include IBM(R) DB2(R) UDB, Sybase, Microsoft(R) SQL Server, and Oracle. This data source needs to be defined to DB2 Alphablox. DB2 Alphablox provides a Comments Management page under the Server link in the DB2 Alphablox Administration tab that lets you specify the relational data source to use for storing comments. From that page, you can create "collections" (data tables) to store comments. For cell-level comments, you will need to specify the multidimensional data source used in your GridBlox, the cube to use (for Microsoft Analysis Services), and the dimensions to include. For general comments, you only need to specify the name.

Here is what you need to do to get a feel of the built-in CommentsBlox in Alphablox.

First, go to the Alphablox Administration site, which by default should be accessible at http://yourserver:9080/AlphabloxAdmin/

From the Administration main tab, you should land on the General page by default. There is a Runtime Management section, amongst others. Click on the Comments link under this section. A pop-up window should appear pointing to http://yourserver:9080/AlphabloxAdmin/comments/commentsAdmin.jsp.

Here you gotta define the comments collection (sort of a repository to store a particular set of comments) and maybe customize the set of fields used for commenting.

Choose a data source that you have already defined elsewhere, enter the security credentials and click the Connect button. All existing collections should appear in the list provided.

Click the Create button to create a new collection. Fill in the details in the form provided; in particular, you need to specify the dimensions on which to enable comments and the fields required for a comment entry.

Click Save once you're done.

Ok, you are done with the necessary configuration.

In your Blox programming, you can do the following:

1. Enable commenting for a GridBlox:

<blox:data>
  <blox:comments collectionName="YourCommentCollectionName"
                 dataSourceName="TheDataSourceUsedForCollection" />
</blox:data>

<blox:grid commentsEnabled="true" />

Then, when you right-click on a grid cell, an option called "Comments" will appear.

2. Use com.alphablox.blox.CommentsBlox and the related com.alphablox.blox.comments.* classes.

This allows you to implement the commenting feature for your relational reports or any other general usage. It requires programming, though.

Mistake Mistake

The multithreaded Java application I wrote in April '06 for extracting data from 90k files into an IBM DB2 v8.2 system suffers from a performance bottleneck when dealing with too many worker threads. This is a common issue in multithreaded programming, and that's why we have something called a thread pool. While waiting for the extraction to finish (God, it has been running for 30 hours and is still executing at this moment), I took the initiative to perform a code review, trying to identify possible programming faults. And oh yeah, I found a potential problem. See the figures below.

What do you think is wrong with the above code snippets? Well, I tried to recall the rationale for doing it like that when I was developing this program, and I think I did it to avoid the intricacies of thread synchronization, given the very constrained time frame I was assigned.

I won't tell you what's wrong with it, and I hope you can spend some time figuring it out. However, I can tell you that one way out of this scalability issue is to use java.lang.ThreadLocal to manage the connection instead.
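The ThreadLocal idea can be sketched roughly like this (a stand-in FakeConnection replaces java.sql.Connection so the sketch runs anywhere; in the real program initialValue() would call DriverManager.getConnection):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: one "connection" per worker thread via ThreadLocal, so no
// synchronization on a single shared connection is needed.
public class PerThreadConnection {
    static class FakeConnection {
        final int id;
        FakeConnection(int id) { this.id = id; }
    }

    private static final AtomicInteger counter = new AtomicInteger();

    private static final ThreadLocal<FakeConnection> conn =
        new ThreadLocal<FakeConnection>() {
            @Override protected FakeConnection initialValue() {
                // Real code: return DriverManager.getConnection(url, user, pass);
                return new FakeConnection(counter.incrementAndGet());
            }
        };

    // Every call from the same thread returns that thread's own instance
    public static FakeConnection get() { return conn.get(); }

    public static void main(String[] args) throws Exception {
        final int[] ids = new int[2];
        Thread t1 = new Thread() { public void run() { ids[0] = get().id; } };
        Thread t2 = new Thread() { public void run() { ids[1] = get().id; } };
        t1.start(); t2.start(); t1.join(); t2.join();
        System.out.println(ids[0] != ids[1]); // true: each thread got its own
    }
}
```

Each worker thread lazily opens its own connection on first use and keeps reusing it, which sidesteps the contention of funneling all workers through one shared connection.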

Tons of POS Transaction File

For the purposes of a POC for a retail company's data mining initiative (again), I was thrashed with 90k small files containing one year of POS transaction data, nearly double the number of files/data of the previous POC effort. Almost 70% of these files are compressed in Z format. I made a quick study of the package provided in Java SDK 1.4.2 (the package has been available since SDK 1.1): no luck, the standard facility only supports the ZIP and GZIP formats. Ok, fine. I searched SourceForge, looking for any open source Java implementation. None of the search results were directly useful to my decompression need. Then I tried WinZip, WinRAR, PowerZip, etc. Emmmm, most of the Windows GUI versions of these programs are able to decompress Z archives, but none of them provide batch processing. Darn, I am not going to decompress 90k files one by one; do I look that dumb? Ok, what about the command-line versions of WinZip and WinRAR? Oops, unfortunately they don't support the Z format in their command-line versions.

I decided to do some research on the Z archive format and found this useful article:

Uncompress gz and Z format

It seems that the Z format and many other compression formats are natively supported by UNIX systems. Too bad for Windows users.

Also, read this
Wikipedia: List of archive formats

Uppercase .Z is a different format from the lowercase .z file. Generally, .Z is produced by UNIX's compress command, whereas .z comes from UNIX's pack command. The compression algorithms used are different too.

Since I only had limited time for this decompression task, I finally settled on the GUNZIP program, which is freely available, and performed a batch decompression. Then I proceeded to the ETL phase.

And here is a forum post I found stating a similar decompression requirement. Most probably I will use Runtime.exec to call out to an external utility such as GUNZIP, rather than trying to find a Java implementation to integrate. Anyway, it depends on the amount of time I have.
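The Runtime.exec approach might look roughly like this (for the real task the command would be something like {"gzip", "-d", file} run once per archive; "echo" is used here only so the sketch runs anywhere):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

// Sketch: driving an external command-line tool from Java with Runtime.exec.
public class ExternalTool {
    static String run(String[] cmd) throws Exception {
        Process p = Runtime.getRuntime().exec(cmd);
        // Read the tool's stdout (draining it also avoids blocking the child)
        BufferedReader out = new BufferedReader(new InputStreamReader(p.getInputStream()));
        String firstLine = out.readLine();
        p.waitFor(); // block until the tool finishes
        return p.exitValue() + ":" + firstLine;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(run(new String[]{"echo", "ok"})); // 0:ok
    }
}
```

In a batch loop over 90k files you would check the exit code per file and log failures instead of printing.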

Similar Issue

Is Your Core Business IT System Crucial?

Imagine a large automobile sales, service and support organization with tons of branches and dealers to manage. Lots of functional IT systems run concurrently, and yet they interact with each other and with external parties. Suppliers, employees, customers and management all rely on these heterogeneous systems to carry out their daily routines.

In this situation, I need to ask again: is your core business IT system crucial? And on top of that question: can your enterprise systems evolve? The reasons for software evolution vary, but they all serve one ultimate purpose: to make sure the business continues to survive.

What do you think will happen if one day they need to:

1. Add a new functional module to existing system

2. Customize existing workflow

3. Deploy a feature enhancement

4. Patch a found bug

5. Interface with a new external authority party

6. Optimize runtime performance

.... And so much else.

As many real-world examples point out, many enterprise projects are unsuccessful for one main reason: they can't evolve in a way that is effective, efficient and minimizes risk.

Enterprise applications are totally different from many other sorts of software. As an enterprise architect, you gotta think at a macroscopic level and execute in microscopic detail.

Over the past month, I was involved in such a scenario, where a team of programmers was given the mandate to evolve a core business system, mainly in the areas of bug fixing and functionality enhancement.

Some of my findings are summarized as below:

The Good:
1. Architecturally, the system is well designed, with plenty of design patterns applied and plenty of property files (which might not be a good thing; heard about XML configuration hell?) available for attaching loosely coupled, late-bound components.

2. Well known open source frameworks and components were used. Struts, Hibernate, Apache Commons, Log4J, WebDoclet, etc.

3. The presentation tier and business tier are relatively well encapsulated, with defined access points.

4. The use of ThreadLocal for managing Hibernate session

5. The use of dynamic proxies to manage DAO method invocations (though the implementation is less than satisfactory)

(Trying very hard to think of more.... but I can't seem to find any other good points.)
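For readers unfamiliar with point 5: a dynamic proxy wraps every DAO call behind an InvocationHandler, where cross-cutting work (session handling, transactions, logging) can live. A minimal sketch, with hypothetical interface and method names:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

// Sketch of the dynamic-proxy-around-a-DAO idea (names are illustrative).
public class DaoProxyDemo {
    interface OrderDao { String findOrder(int id); }

    static class OrderDaoImpl implements OrderDao {
        public String findOrder(int id) { return "order-" + id; }
    }

    static OrderDao wrap(final OrderDao target) {
        return (OrderDao) Proxy.newProxyInstance(
            OrderDao.class.getClassLoader(),
            new Class[]{OrderDao.class},
            new InvocationHandler() {
                public Object invoke(Object proxy, Method m, Object[] args) throws Throwable {
                    // Cross-cutting work (open session, begin transaction) goes here
                    Object result = m.invoke(target, args);
                    // ... and commit/close would go here
                    return result;
                }
            });
    }

    public static void main(String[] args) {
        System.out.println(wrap(new OrderDaoImpl()).findOrder(7)); // order-7
    }
}
```

Callers only ever see the OrderDao interface, so the session/transaction plumbing stays in one place instead of being repeated in every DAO method.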

The Bad:

1. Running on IBM WebSphere 5.1, the developers mixed the usage of the Struts taglibs and JSTL. Personally, I wouldn't do that: many JSTL functionalities overlap with the Struts-related taglibs, and since JSTL is not yet part of JSP 1.2, my advice is to stick with Struts only. Throwing JSTL in here makes deployment difficult and a future upgrade to a newer version of the J2EE container more troublesome.

2. Unnecessary coupling between the core framework classes and web-application classes. Imagine one of your core engine classes actually referencing a class under /WEB-INF/classes. The cyclic dependencies caused the original Ant script to fail.

3. The existence of two separate mechanisms for obtaining database connections: one uses the Hibernate session, the other a direct JNDI lookup. Hey dude, can't you just use one? In the new version of the system, I refactored the codebase to use only the Hibernate session, because the previous JNDI lookup mechanism involved plenty of property configuration files, one per component (huh?).

And more....

The Evil: (We got our winners here)

1. Drag and drop deployment mechanism
- I nearly fainted when I first saw this deployment process. To deploy new classes to the production server, you "drag and drop" your classes directly through an FTP interface into their respective directories. Imagine you have 10+ updated and new classes in different Java packages to deploy: how inefficient it is to find each respective directory, drop the file, cross your fingers and hope the container loads it properly, then continue with the next.

2. Minimal or no documentations
- Well, no need to say much about this. Even the business process is not properly documented, and most of the time you will be like: "Huh, how come the button is still disabled? Do I need to do anything anywhere else?"

3. Not a single unit test
- Without unit tests, how can you confidently perform your integration tests? In fact, high latency resulted from minor configuration issues, such as a missing mapping entry in the Hibernate configuration file, which is supposed to be discovered during unit testing.

4. Business requirements not properly logged.
- From time to time, developers and analysts ask themselves: Emmmm, this looks familiar; was it implemented before? Oops, I can't really tell; grant me two days to trace through the code again.

5. Problematic concurrent transactions
- Lost updates did happen, e.g. stock quantity discrepancies.

6. Poor Exception Handling
- I don't even want to talk about this.

To be continued. Stay tuned. :)

The ten fallacies of enterprise computing

Continuing from my previous journey through Java Reflection in Action, I've chosen another route, into enterprise computing: Effective Enterprise Java by Ted Neward, if you insist on knowing the book's name. Below is an excerpt listing the ten fallacies of enterprise computing from the book's chapter 1.

1. The network is reliable.

2. Latency is zero

3. Bandwidth is infinite

4. The network is secure

5. Topology doesn't change

6. There is one administrator

7. Transport cost is zero

8. The network is homogeneous

9. The system is monolithic

10. The system is finished

On these 10 rules is much of this book built.

J2EE and Java EE. Which one sounds better?

The name of the Java platform for the enterprise has been simplified. Formerly, the platform was known as Java 2 Platform, Enterprise Edition (J2EE), and specific versions had "dot numbers" such as J2EE 1.4. The "2" is dropped from the name, as well as the dot number. So the next version of the Java platform for the enterprise is Java Platform, Enterprise Edition 5 (Java EE 5).

Personally, I like "Java EE". I dislike the idea of introducing a quantifier into the name of a product. I just feel that the presence of numbers in any name causes some loss of abstraction and creativity, and makes people sound like real nerds (which in fact they might not be). The name J2EE is even worse, as the number 2 sits in the middle, not as a prefix or suffix.

No offence to people who named J2EE. Just my 2 cents. :p

Java Technology Road Map

Just in case you ever wonder how BIG the planet of Java is, here is a brief map of the Java world. Can you tell the "North" of this world from the "South"?

Java Technology Road Map

The Advantages of the Java EE 5 Platform

A Conversation with Distinguished Engineer Bill Shannon

Version 5 of the Java Platform, Enterprise Edition (Java EE, formerly referred to as J2EE), has arrived. Its streamlined features offer added convenience, improved performance, and reduced development time, all of which enable developers to bring products to market faster.

To get an update on the Java EE 5 platform, we met with Java EE specification lead Bill Shannon, a Distinguished Engineer at Sun Microsystems. Shannon has been with Sun since 1982 and previously worked on the JavaMail API, the HotJava Views product, the Common Desktop Environment (CDE), the Solaris Operating Environment, and all versions of SunOS. He graduated from Case Western Reserve University with an MS in Computer Engineering.

Read More

Many of the highly anticipated features and technologies made their way into Java EE 5. Among them are the new EJB 3.0 and Java Persistence API, JSF, and simplified component and service development. Evolving from the J2EE 1.4 orientation around Web Services for J2EE (JSR 109) and related JSRs, J2SE 5.0 annotations definitely make a difference in the ease of creating services. If you are still scared away by the intricacies of SOAP, WSDL, UDDI, endpoint configuration and so on, check out how Bill Shannon answered the following question:
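To see why annotations make such a difference, here is a self-contained sketch of the underlying mechanism: metadata placed directly on a class replaces an external XML descriptor, and a container can discover it by reflection. The `@Exposed` annotation and `QuoteService` class are made up for illustration; they stand in for the real JSR 181/JAX-WS annotations such as `@WebService` and `@WebMethod`.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

public class AnnotationDemo {
    // Hypothetical stand-in for @WebMethod: marks a method as a service operation.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface Exposed { String operationName() default ""; }

    static class QuoteService {
        @Exposed(operationName = "getQuote")
        public double quote(String symbol) { return 4.95; }

        public void internalHousekeeping() { }   // not annotated, so not published
    }

    // A container-style scan: collect the operations a deployer would publish,
    // with no deployment descriptor in sight.
    static List<String> exposedOperations(Class<?> service) {
        List<String> ops = new ArrayList<>();
        for (Method m : service.getDeclaredMethods()) {
            Exposed e = m.getAnnotation(Exposed.class);
            if (e != null) ops.add(e.operationName().isEmpty() ? m.getName() : e.operationName());
        }
        return ops;
    }

    public static void main(String[] args) {
        System.out.println(exposedOperations(QuoteService.class)); // [getQuote]
    }
}
```

This is exactly the trade the Java EE 5 service stack makes: the metadata lives next to the code it describes, and the platform generates the WSDL and wiring from it.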

Question: If you could speak to an audience of 1000 talented developers who were on the fence, and considering moving to Java EE 5, what would you say to them?

This is not your father's J2EE!

Summary of New Features in JavaServer Faces 1.2 Technology

  • Alignment with JSP technology

  • Ease-of-use improvements in support for custom messages

  • Improved state-saving behavior

  • Ability to turn off generation of component client IDs

  • New setPropertyActionListener Tag

Finally, the JSF 1.2 specification deprecated the separate JSF 1.1 EL in favour of the unified expression language shared with JSP 2.1. This move is good because it gives JSP/JSF developers one reusable syntax and development semantic, and it integrates JSF more tightly with the Java EE platform.

Support for the JSTL forEach action in iteratively generating JSF components eliminates the need to use a dataTable (one of the workarounds in prior JSF versions) to contain dynamically generated JSF components. It also eliminates the use of f:verbatim to correctly render interleaved HTML and JSF content.
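As a small sketch of what this buys you, a plain JSTL loop can now drive JSF output components directly through the shared EL; the bean and property names below are hypothetical:

```jsp
<%-- Hypothetical order listing: c:forEach emitting JSF components,
     something that needed dataTable or f:verbatim workarounds in JSF 1.1 --%>
<c:forEach items="#{orderBean.lines}" var="line">
  <h:outputText value="#{line.productName}" />
  <h:outputText value="#{line.quantity}" />
</c:forEach>
```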

As usual, the use of IFrames, HTML framesets or multiple browser windows causes technical issues for server-side frameworks that track state with identifiers such as the session id and view id. JSF 1.2 rectifies this problem within the framework.

With the new specification in place, I am eagerly waiting for the next generation of JSF IDEs, where the promise of a better JSF nirvana can be delivered.

Read more by following the link below.

Sun Java: JSF 1.2

SingleThreadModel behavior in Websphere

Although the SingleThreadModel interface is deprecated in the Servlet 2.4 specification, there is still an occasional need for it. Of course, good programming practice avoids instance and class members in a servlet (or in any class that supports concurrent multithreaded invocation). I have a situation, however, where an API provided by a third-party vendor does not seem to behave properly in a multithreaded environment.

Referring to the Servlet specification, there are two ways a container may handle a servlet that implements SingleThreadModel. First, the container creates only one instance to serve all requests, and all requests are handled sequentially. Second, a number of instances are created and placed into a pool, and each request is assigned a servlet instance. It is not guaranteed that each request gets a fresh instance; instances in the pool are reused.

WebSphere Application Server handles SingleThreadModel using the second approach. Although resource requirements increase to support it, this is better than the first approach, where user response time (waiting for their turn) would be unacceptable.
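The pooled strategy can be sketched in plain Java. Below is a minimal, generic instance pool mirroring the second approach: at most N instances exist, each request borrows one, and no instance is ever used by two threads at once. This is an illustrative sketch, not WebSphere's actual implementation; in practice the pooled object would be the servlet wrapping the non-thread-safe vendor API.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.function.Function;
import java.util.function.Supplier;

// Sketch of a SingleThreadModel-style pool: N pre-created instances,
// one borrower per instance at a time, instances reused across requests.
public class InstancePool<T> {
    private final BlockingQueue<T> pool;

    public InstancePool(int size, Supplier<T> factory) {
        pool = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) pool.add(factory.get());
    }

    // Borrow an instance, run the work, return it. At most `size` requests
    // proceed concurrently; the rest block until an instance frees up.
    public <R> R withInstance(Function<T, R> work) {
        T instance;
        try {
            instance = pool.take();        // blocks while all instances are busy
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new IllegalStateException("interrupted while waiting for an instance", e);
        }
        try {
            return work.apply(instance);
        } finally {
            pool.add(instance);            // capacity is guaranteed: we just took one out
        }
    }
}
```

Note that, exactly as the specification warns, a request may receive a previously used instance, so any per-request state must be reset on entry.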

IBM: Websphere Best Practices

Struts Answer to JSF

As JSR 127 (JSF) cemented its position as the standard Java web UI framework in Java EE 5, another well-known and commonly used MVC framework, Struts, responded by initiating a more generalized framework that allows JSF to be incorporated into it: the Struts Shale project.

Apache: Struts Shale