Implementing Basic Auth with RAD and Servlet 2.4

A colleague recently wrote a RESTful service and wanted to add a bit of security to it.  Here at Cat we use IBM's RAD environment and I personally prefer to use the editors to make changes to the web.xml and such.

I wrote up a big woop-de-do maybe a year ago on how to set up basic auth in WebSphere 7 and thought it was different from WebSphere 6.  However, the differences really come down to the servlet spec.  The servlet spec is declared at the top of the web.xml like this:
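For a servlet 2.4 project, the standard header is:

```xml
<web-app xmlns="http://java.sun.com/xml/ns/j2ee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://java.sun.com/xml/ns/j2ee
                             http://java.sun.com/xml/ns/j2ee/web-app_2_4.xsd"
         version="2.4">
```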


Here we see a spec 2.4 web.xml.  These instructions will show how to use the RAD tooling to add basic auth to a web project.  We are going to assume that you have a web project written and just need to add Basic Auth to it.  Also, we are assuming that you are going to hook the services up to LDAP groups.

1.  The first step is to open the wizard up and select the "Security" tab (tabs are at the bottom).  



The first items we need to add are some Security Roles.  Notice here I have 5 roles, all named "blahblahRole".  This is a good naming convention and I suggest it.  We will use the first one, "ServiceToolRole", as the example.  Make one of these for each service you want to secure.  This will create XML in our web.xml like this:
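Something like this (the description text is just illustrative; only the role-name matters):

```xml
<security-role>
  <description>Role for the Service Tool service</description>
  <role-name>ServiceToolRole</role-name>
</security-role>
```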



2.  Next you add some constraints.  These map onto your servlet mappings: you add a "blahblahConstraint" and then add the resources to protect (the HTTP methods and URL patterns) and a role.


So here you add the HTTP verbs you want to constrain (all others will NOT be constrained) and the URL pattern.  Once this is added, you need to tie it to the role you created in step #1.
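Put together, the generated constraint looks something like this (the URL pattern here is a made-up example; use your own service's path):

```xml
<security-constraint>
  <display-name>ServiceToolConstraint</display-name>
  <web-resource-collection>
    <web-resource-name>ServiceTool</web-resource-name>
    <!-- hypothetical path; point this at your service -->
    <url-pattern>/services/tool/*</url-pattern>
    <http-method>GET</http-method>
    <http-method>POST</http-method>
  </web-resource-collection>
  <auth-constraint>
    <role-name>ServiceToolRole</role-name>
  </auth-constraint>
</security-constraint>
```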

3.  OK.  Now you have your constraints and roles defined.  The next thing we need to do is tell WebSphere to use Basic Auth.  Proceed to the "Pages" tab to set this up:


Here we select "BASIC" from the drop down and then add a realm string that will eventually appear in the browser login window:

The realm text appears in the red box above, though each browser renders it differently.  This is the only customization you can do to the logon prompt, so don't waste your time trying...
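In the web.xml, this shows up as a login-config entry; the realm text below is just an example:

```xml
<login-config>
  <auth-method>BASIC</auth-method>
  <realm-name>Enter your LDAP user ID and password</realm-name>
</login-config>
```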

4.  We are done with the web.xml.  Now we need to set up the EAR file.  Find the application.xml file in the EAR and click on the "Security" tab.


There may be no roles in your view.  If the one you want is not there, hit the "Gather" button at the bottom.  This goes into the web.xml, "gathers" all the roles, and displays them.  Then click "Users/Groups" and add the LDAP groups you want.  You can also add individual users.  Save and close and you should be done.

Using the wizard here is a must because the file ibm-application-bnd.xmi is also changed.  It contains the specific LDAP groups and logons.

That's it!  Now when you hit your RESTful service, you should be met with a logon prompt.  Type in your credentials and Bam!  Data!  Note that if you put in incorrect credentials, your webapp will not even know it.  WebSphere intercepts these requests and sends back errors before anything is passed to your app.

Let me know if this has been helpful!

Evolutionary Design...

Quite a remarkable thing occurred with the FPS project and I thought I would document it before I forgot the details.  The project started over a year ago and has since completed with universally acknowledged success.  We had an excellent team consisting of 1 project manager, 5 programmers, and 1 ex-programmer who knew the business logic through and through, and most importantly we had engaged customers: engineers, to be exact.

There are several reasons why this project, in my opinion, was a success:
  1. Engaged Customers.  I list this first because it is the most important.  They knew exactly what they wanted, had "skin" in the game, and were willing to meet and verify every change we made.
  2. Testing philosophy.  Our 80% + JUnit coverage helped us do radical refactoring (the main reason for this post) right up to the end of the project.  I was amazed at the extent of changes made each iteration.
  3. Agile project management (Scrum).  Our project manager was very focused and kept us focused.  
  4. Weekly code reviews with teeth.  We would go over every change (quickly) and record what needed fixing.  We would even follow up...
  5. Evolutionary Design...
The last one I would like to elaborate on...

When someone buys an engine from us it has a stated horsepower.  This horsepower can be increased with some additional iron added to the engine, and a modification to the ECU.  The modification to the ECU is done using passwords and such and we charge for these upgrades.  FPS is a money maker at Caterpillar, but we don't just let anyone do it.  We have the EPA and foreign countries to satisfy and we certainly don't want to just give stuff away, so there is plenty of security behind all of this.

We designed our object model based on the ECU.  The input is a bunch of information that allows us to recreate the ECU as a model object.  This ECU is then validated and billed and passwords are generated.

The ECU model started out as a Java Map of values that was sent throughout the application and was acted upon.  It did have some responsibilities such as data hiding and such, but was pretty anemic.  Early on I knew I wanted it to have responsibilities, but we were not sure what they would be or even what the nature of the ECU itself was.  Our project had 2 main phases and in the first phase our ECUData didn't need much validation.
Step One:  I can tell you if the input I have been given is valid, and I can create a password string...
Phase One went out with a bang and everyone loved it.  Phase Two began and we slowly got requirements and started coding.  I have always been one to not worry about performance.  Dan Long told me long ago, "Make it work, make it pretty, make it fast".  In that order, please, and stop when people stop complaining.

There are lots of verifications we had to do to decide if and when to charge someone for a password.  We have to find out if the request is valid (can we upgrade the engine from 50hp to 5000hp?), if they have already been billed for this upgrade once (folks don't like paying twice), and who to charge and how much.  Some of these decisions are very complex and take many database calls.  As a matter of fact, one time through our app can trigger over 200 database calls. (wow!)  One of the team members was quite adamant that we needed to make each database call only once.  I, however, didn't care how many times we called for duplicate values because it was very fast already and (according to Dan) we shouldn't worry about that yet (defer decisions).  The discussion got pretty heated several times and was starting to influence the design.  We discussed creating a "reference book" object that would hold all this stuff, could be toted around our app, and referenced when needed (kinda like a cache).  We (Tim and I had a discussion by my desk about it) finally decided on a MetaData map in our ECUData object that would hold this stuff.
Step Two:  I can tell you if the input I have been given is valid, I can create a password string, and I can tell you some stuff about the data I have been given (metadata).
I thought this was a great idea.  We started stuffing data in there and bam, our code became simpler.  Work progressed and we discovered that everywhere we asked for some metadata, we would have to find out if it had already been fetched (if (metadata.contains(productId)) ... else go get it).  This code was everywhere and was turning our code base into a bunch of "ifs".  Tim (again; I wish I could think of these things) came up with the idea of "providers".  So we code up a provider and then we can just ask the ECUData object, "Hey, do you have the product ID for this serial number?"  The ECUData object would look in the metadata cache and, if it found it, return it.  If it didn't find it, it would look in the database (using the provider), get it, stick it in the cache, and give it to you.  GENIUS!  It was as if my eyes had been opened!
Step Three:  I can tell you if the input I have been given is valid, I can create a password string, and I can tell you some stuff about the data (metadata) and go get it if I don't know it already.
Finally, I don't have an anemic data model!  The amount of code this eliminated was astronomical.  We looked up, for example, a serial number's productId everywhere (some things in 3 different places) and now all that code is in one place, and we don't care whether we already have the value or need to fetch it for the first time.  Since we had a great set of JUnits, this code change was trivial (I believe it took about 4 hours).  Another programmer and I did all the work in an afternoon and there was much rejoicing!  We also spread the philosophy of providers to the rest of the team and everyone jumped in.  We now have 12 providers and love them.
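The provider idea can be sketched like this (the names here are hypothetical; the real ECUData, providers, and database calls were more involved): callers just ask for metadata, and the object checks its cache first and falls back to a provider on a miss.

```java
import java.util.HashMap;
import java.util.Map;

// A provider knows how to fetch one kind of metadata (e.g. from the database).
interface MetadataProvider {
    Object fetch(String key);
}

class ECUData {
    private final Map<String, Object> metadata = new HashMap<>();
    private final Map<String, MetadataProvider> providers = new HashMap<>();

    void registerProvider(String key, MetadataProvider provider) {
        providers.put(key, provider);
    }

    // Callers just ask; they never care whether the value was cached or fetched.
    Object getMetadata(String key) {
        if (!metadata.containsKey(key)) {
            MetadataProvider provider = providers.get(key);
            if (provider == null) {
                throw new IllegalStateException("No provider registered for " + key);
            }
            metadata.put(key, provider.fetch(key)); // fetch once, then cache
        }
        return metadata.get(key);
    }
}

public class ProviderSketch {
    public static void main(String[] args) {
        ECUData ecu = new ECUData();
        final int[] calls = {0};
        // Stand-in for a database lookup.
        ecu.registerProvider("productId", key -> { calls[0]++; return "PROD-123"; });

        // First call fetches; subsequent calls hit the cache.
        ecu.getMetadata("productId");
        ecu.getMetadata("productId");
        System.out.println("fetches=" + calls[0]); // the provider ran only once
    }
}
```

This is exactly what killed the scattered "if (metadata.contains(...))" checks: the decision lives in one place.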

Since we now had a smart object, we looked for other ways to make it even smarter.  There was an abundance of comparisons made to the ECUData object and we refactored those into Wrappers that we inserted at create time.  This eliminated another bunch of code.
Step Four:  I can tell you if the input I have been given is valid, I can create a password string, I can tell you some stuff about the data (metadata) and go get it if I don't know it already, and I can compare myself to other things easily.
All of the significant design changes to our ECUData object came in the second half of the overall project.  They required significant code changes, yet did not take much time to implement because of extensive JUnits and customer functional testing, and because we had engaged programmers who jumped in to help.  In his article "Is Design Dead?", Martin Fowler calls these items "Enabling Practices".  I believe this is the first (and hopefully not last) time in my career when all the stars aligned!

When the project began I was given a document prepared by an engineer that contained a flow chart and tens of pages of documentation on how to decode the ECU parameters.  This document was changed several times by the customer and I implemented it pretty much as it was given to me.  The JUnits on this thing were exhaustive and we have had no trouble with it so far.  However, there are no Model objects involved; it is entirely procedural coding.  I have often thought of how I would create a Model to implement it.  I think it could be very cool and flexible (currently it is not flexible: it does one thing, does it well, and nothing else).  About two-thirds of the way through writing it I was getting brain cramps because of how procedural it was becoming, but there was no turning back.  I believe it is an illustration of the design being done up front.  When this decoder was done, we discovered that it could not create an alternate ECUData object (we have Standard and Legacy).  We had to code, from scratch, a different decoder to handle the other one.  I believe that had we created a Model for the decoder, we could have had one decoder to create both types of data.

All things cannot be designed using Evolutionary Design.  I built a deck this year on my house.  I used Evolutionary design (really!).  I had many problems.  I was always fixing/correcting things I did in the previous phase.  It turned out OK, but could have been much better had I known what I was doing up front.  I believe software is different:
If you can easily change your decisions, this means it's less important to get them right.  (Fowler)
We have a unique opportunity with software.  If the stars align!

Bean Templates and Inner Beans in Spring

Based on Mark's "Hibernate Transactions - Part 1 - How to Wrap..." post and comments, I'm posting a couple of examples of how to use template and inner beans to more easily manage bean definitions.

Abstract Template Beans
Template beans can be extremely useful for defining common configuration settings that you might want to be able to reuse across multiple bean definitions.

An abstract bean definition allows you to specify a "template" that you can use for any beans to which you want to apply a common set of configuration settings. In the case of transaction-managed services, if your services use consistent naming patterns (save*, delete*, get*) you can reuse that template across all your services that use the same naming pattern.

So, instead of defining the same transaction manager attributes over and over again on each and every one of your services you can just define an abstract template and then define that template as a parent for each bean definition that you want to wrap with a transaction manager. Makes for more concise bean definitions if you have a lot of services that need to use a transaction manager...

The following example shows how you can define a common transaction manager template for your services that you want to wrap with a transaction:

<!-- This defines a default template for a transaction-managed service.
     Note the use of the abstract attribute, which allows us to extend this
     definition for our service bean definitions without actually instantiating
     a bean that represents this "template". -->
<bean id="txManagerTemplate" abstract="true" depends-on="transactionManager"
      class="org.springframework.transaction.interceptor.TransactionProxyFactoryBean">
  <property name="transactionManager">
    <ref bean="transactionManager"/>
  </property>
  <!-- Define the default patterns for methods that require transaction management. -->
  <property name="transactionAttributes">
    <props>
      <prop key="save*">PROPAGATION_REQUIRED</prop>
      <prop key="delete*">PROPAGATION_REQUIRED</prop>
      <prop key="*">PROPAGATION_REQUIRED,readOnly</prop>
    </props>
  </property>
</bean>

Abstract bean templates aren't just for transaction-managed services, either. You can use them for any bean definitions where you want to reuse a common set of configuration settings across multiple bean definitions. If you find yourself defining the same properties over and over again on many different bean definitions, a bean template can be an easy way to simplify your configuration.

Using Inner Beans

Another suggestion mentioned in the "Hibernate Transactions - Part 1 - How to Wrap..." post was the use of naming patterns in order to distinguish the actual service implementation from the service that you want to expose as a transaction-managed service.

Another way to define the services is to use an inner bean as the 'target' property for the TransactionProxyFactoryBean instead of a reference to a standalone bean. The benefit of defining a bean in this manner is that an inner bean can't be referenced directly because it doesn't have a bean ID. This helps avoid confusion over which bean you're supposed to interact with, because now the only bean that's actually available is the one you've wrapped with the transaction manager.

Below is an example where the actual service implementation is wrapped as an inner bean. This allows only the actual service that you have wrapped with the transaction manager to be exposed as a service bean and prevents someone from inadvertently using the service that doesn't use a transaction manager.

<!-- Define the service but wrap it using the transaction manager template so
     operations can be performed transparently with or without a transaction.

     Note how this bean definition reuses the common transaction manager template
     via the parent attribute on the bean definition. -->
<bean id="ImportKeyDataService" parent="txManagerTemplate">
  <property name="target">
    <!-- Define as an inner bean here so that we can't directly reference the implementation.
         Using an inner bean avoids any confusion about which service bean should actually be used. -->
    <bean class="cat.dds.fpsdma.services.ImportKeyDataService">
      <constructor-arg index="0" ref="fpsDAO"/>
    </bean>
  </property>
  <!-- You can override the default transaction attributes if necessary.
       However, this isn't required if your service uses the same naming
       pattern as defined in your parent bean definition. -->
  <property name="transactionAttributes">
    <props>
      <prop key="save*">PROPAGATION_REQUIRED</prop>
      <prop key="write*">PROPAGATION_REQUIRED</prop>
      <prop key="delete*">PROPAGATION_REQUIRED</prop>
      <prop key="*">PROPAGATION_REQUIRED,readOnly</prop>
    </props>
  </property>
</bean>

Hibernate Transactions - Part 1 - How to Wrap...

Just had a call from someone wanting to know how to use Hibernate to delete some data from a table. Hibernate is extremely powerful when transactions are used correctly. First of all, let's start with where to "wrap" the transactions...

Most J2EE architecture is built like this (or should be!):

  1. Actions (Struts)
  2. Services
  3. DAOs (Data access objects)
One minor note: Actions should NEVER have DAOs in them.  No reason.  Never.  The question you should ask when coding any Action is this: "If I do this in the Action, will I need to recode it when I want to add a REST service that does the same thing?"  Let me explain.  If you put ALL business logic and database calls in the Service layer, it is nothing to put a REST service in front of it.  If you put DAO calls (or any business logic) in the Actions, you will have to cut and paste that code into a Service in order to expose the same logic through a REST service.  The following is brilliant service code!

public List getAllActiveLegacyFeatures() throws Exception {
  List features = fpsDao.grabAllActiveLegacyFeatures();
  return features;
}

Moving on.  So you really only have 2 choices: wrap the transactions at the DAO layer or at the Service layer.  Let's begin by showing how to "wrap" a transaction.  Here is some Spring configuration that does it:

<bean id="ImportKeyDataServiceImplementation" class="cat.dds.fpsdma.services.ImportKeyDataService">
  <constructor-arg index="0" ref="fpsDAO"/>
</bean>

<bean id="ImportKeyDataService" class="org.springframework.transaction.interceptor.TransactionProxyFactoryBean">
  <property name="transactionManager">
    <ref bean="theTransactionManager"/>
  </property>
  <property name="target">
    <ref local="ImportKeyDataServiceImplementation"/>
  </property>
  <property name="transactionAttributes">
    <props>
      <prop key="*">PROPAGATION_REQUIRED</prop>
    </props>
  </property>
</bean>

The first bean here is the actual object. Notice that it takes a DAO as a constructor argument. Now this object should never be used directly, so we call it an "...Implementation". Notice that the second bean is named "ImportKeyDataService". It is not an implementation, but the object that we should use. (This is just how we did it on FPS. We used this naming convention so we knew we should never use the "Implementation" objects.) Notice that the Implementation object is referenced as the "target" property of the proxy bean. This ties it all together. So, when you actually use the "ImportKeyDataService" object you will get a wrapped transaction.

What does this give you? Lots. When any DAO call is made inside the ImportKeyDataService object above, Hibernate will not commit anything until the service method returns. Batch updates and such automatically roll back correctly if an error is thrown out of the ImportKeyDataService object. This would not happen if you wrapped the DAO. Every time you go into and back out of a wrapped object (the object loses scope) you commit. So, if you wrapped a DAO, it would commit every time you made a call.

Now there are two times when you want the DAO to be wrapped.

  1. If you are doing DB2 calls to CDID or some alien database, and you are just doing "selects", it is best to commit after every transaction.  For some reason Mainframers like that.
  2.  JUnits usually require commits to work properly.  We make a JUnit for EVERY database call.  They just don't work if you don't commit after each one.  However, in this case, Spring makes this easy.
The most powerful thing about Wrapping transactions, though, is the "Lazy" fetching.  I will talk about that in the next post...

Find a Duplicate File...

We had a problem with duplicate XML files showing up to be loaded in an app.  After a quick search on the web, the suggestion was to use checksums.  I came up with this:

private boolean isDuplicateFile(String fileName, String path, String backupPath) throws IOException {

  File theFile = new File(MEDUtils.addToFilePath(path, fileName));
  if (!theFile.exists()) {
    throw new FileNotFoundException(MEDUtils.addToFilePath(path, fileName));
  }
  long fileChecksum = FileUtils.checksumCRC32(theFile);

  // Compare against the checksum of every file already in the backup directory.
  File[] files = new File(backupPath).listFiles();
  if (files == null) {
    return false; // backupPath is not a directory (or is unreadable)
  }
  long[] checksums = new long[files.length];
  for (int i = 0; i < files.length; i++) {
    long checksum = FileUtils.checksumCRC32(files[i]);
    new MEDInfoEvent(this, "checkForDuplicateFiles()", "File name: " + files[i].getName() + " checksum: [" + checksum + "]");
    checksums[i] = checksum;
  }
  return ArrayUtils.contains(checksums, fileChecksum);
}

Cool SQL...

Here are some cool SQL tricks that I have learned...

Recently I had to order some data but be case-insensitive. I did it in Java with a custom Comparator, but this SQL can do it also:
select *
from field_units
ORDER BY Lower(unit_metric_name)
This can find duplicate rows in a table:
select cust_id, site, count(*)
from alliance_customer_sites
group by cust_id, site
having count(*) > 1
This will, in Oracle, find all the tables in a schema:
select table_name, num_rows counter
from dba_tables
where owner = 'SCHEMA_NAME'
order by table_name;
This lists the tables and their columns in DB2 land:
select (SUBSTR(TBNAME, 1,30)) as Table, COLNO, (SUBSTR(NAME, 1,30)) as Column, COLTYPE, LENGTH, SCALE, DEFAULT, NULLS
from SysIBM.SysColumns
WHERE TBCreator = 'Z1Z10001$'
Here is a basic loop in Oracle:
BEGIN
  -- must have projects table loaded first...
  -- obviously you can join tables from different databases. Cool!
  DELETE FROM project_commodities;
  DECLARE
    CURSOR c1 IS
      select pc.proj_id, pc.type, ct.commodity_id
      from project_commodities@z1sh pc, Commodity_types ct
      where pc.TYPE = ct.COMMODITY_TYPE
      order by proj_id;
  BEGIN
    FOR x IN c1
    LOOP
      INSERT INTO project_commodities VALUES (x.proj_id, x.type, x.commodity_id);
    END LOOP;
    COMMIT;
  END;
END;
This will find a stored procedure in Oracle:
select dbms_metadata.get_ddl('PROCEDURE','Stored_proc_name') FROM DUAL;
In DB2 land users need to add "WITH UR" to the end of all Select statements. In Hibernate this is quite difficult, until you understand the power of the Hibernate Interceptor.
  1. Extend org.hibernate.EmptyInterceptor.
  2. Override the onPrepareStatement() method.
  3. This method has a String argument, which is what Hibernate thinks the final SQL should look like.  Just append " with ur" to the end of it and go.
  4. Remember, ALL SQL goes through this, so what I have done in the past is add some "if" logic and look for the words insert, update, delete, etc. (StringUtils.containsIgnoreCase(arg0, "insert")).  Don't add "with ur" to those.
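The steps above can be sketched as a plain helper (the class name and exact filtering rules here are mine; in the real code this string-munging would live inside your org.hibernate.EmptyInterceptor subclass, in the overridden onPrepareStatement(String sql) method):

```java
// Hypothetical helper showing the "append WITH UR to reads only" logic.
// In real code, call this from onPrepareStatement() in an EmptyInterceptor subclass.
public class WithUrHelper {

    public static String appendWithUr(String sql) {
        if (sql == null) {
            return null;
        }
        String lower = sql.toLowerCase();
        // Leave writes alone -- "WITH UR" only makes sense on reads.
        if (lower.contains("insert") || lower.contains("update") || lower.contains("delete")) {
            return sql;
        }
        return sql + " with ur";
    }

    public static void main(String[] args) {
        System.out.println(appendWithUr("select * from alliance_customer_sites"));
        System.out.println(appendWithUr("insert into t values (1)"));
    }
}
```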
Another tip in DB2 land.  Strings and Chars are a pain, as are Doubles and BigDecimals.  Extend org.hibernate.dialect.DB2390Dialect and in the constructor you can "register" types:

public class DB2DialectExtension extends DB2390Dialect {
    public DB2DialectExtension() {
        super();
        registerHibernateType(Types.CHAR, Hibernate.STRING.getName());
        registerHibernateType(Types.DECIMAL, Hibernate.DOUBLE.getName());
    }
}
Doing this will keep the model beans nice and simple.  And nobody uses Char's anymore anyway...
These last two are hooked up in the Hibernate session factory configuration: the interceptor is defined as its own bean and wired in through the session factory's entityInterceptor property, and the new dialect is set via the hibernate.dialect property (cat.dds.med.utils.DB2DialectExtension), alongside the usual Hibernate settings (the org.hibernate.cache.EhCacheProvider cache provider and the jndi/hibernate/medSessionFactory session factory name).
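A sketch of what that wiring can look like with Spring's LocalSessionFactoryBean (the bean ids, the interceptor class name, and some property keys are my assumptions; only the dialect class, cache provider, and JNDI name come from our config):

```xml
<!-- The interceptor bean; this class name is illustrative. -->
<bean id="withUrInterceptor" class="cat.dds.med.utils.WithURInterceptor"/>

<bean id="medSessionFactory" class="org.springframework.orm.hibernate3.LocalSessionFactoryBean">
  <!-- Hook the interceptor into the session factory. -->
  <property name="entityInterceptor" ref="withUrInterceptor"/>
  <property name="hibernateProperties">
    <props>
      <!-- Use the new dialect. -->
      <prop key="hibernate.dialect">cat.dds.med.utils.DB2DialectExtension</prop>
      <prop key="hibernate.cache.provider_class">org.hibernate.cache.EhCacheProvider</prop>
      <prop key="hibernate.session_factory_name">jndi/hibernate/medSessionFactory</prop>
    </props>
  </property>
</bean>
```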
Sweet!

Visualizing HypersonicDB data in JUnit

We all know that HypersonicDB has a GUI client. But, I could never figure out how to use that client in a JUnit running Hypersonic as a memory-only DB. Until now...


public void startHSQLGUI() {
  // DatabaseManagerSwing lives in org.hsqldb.util (inside hsqldb.jar)
  String[] strings = new String[] {"-driver", "org.hsqldb.jdbcDriver", "-url", "jdbc:hsqldb:mem:appTempl", "-user", "sa", "-password", ""};
  DatabaseManagerSwing.main(strings);
}


You need to change the -url, -user, and -password options as appropriate; the URL must match the in-memory database your test is actually using.

Now, put a breakpoint somewhere in your code and fire that method when the breakpoint gets hit (in RAD, I open the "Display" view and execute startHSQLGUI()).

One problem though... as with most fat-client GUIs, closing the GUI apparently issues a System.exit() and will kill the JVM. (If I remember right, newer HSQLDB versions accept a "--noexit" argument to suppress that.)