Khanderao on Emerging And Integration Technologies

Friday, March 30, 2007

A mashup's time? Welcome to the Enterprise Mashup world

With the emergence of Web 2.0 in general and the explosion of applications using Google Maps in particular, mashup became a buzzword in the web community. A mashup is a web or server-side application that combines information from various sources so that it can be enriched and presented in a meaningful way. In the last couple of years developers have built almost two thousand mashups for different application spaces like maps, real estate, video, sports, news, shopping and photos, using APIs provided by well-known companies like Google, Yahoo, eBay, Amazon, etc. The emergence of Ajax also helped innovative, fancy applications to appear. Mashups are now finding a place in commercial applications too. A recent example is Salesforce announcing mashup support in analytical dashboards in its upcoming W07 release.



Depending upon where the mashup takes place, one can classify them as client-side or server-side mashups. Oracle's WebCenter, based on portal standards, is in the first category, while QEDWiki from IBM is in the latter.



Many tools, free as well as commercial, online as well as installable, are coming up so that mashups can be built easily. Some of them are listed here:

Above All

RSSBus

Procession

RachetSoft

JackBe

datamashups

Dapper

Rex from Nexaweb

QEDWiki from IBM



Many startups are also rushing in to cash in on this trend, coming up with products based on mashups. We should see more innovation, and hits and misses, in this field.


Anyway, if you come across good mashup tools, let me know.


Thursday, March 29, 2007

A free Guice from Google causing you a Guice vs Spring dilemma?


Whenever Google announces something, it makes big news. No wonder that when Google released its internal Java dependency injection (DI) framework called Guice (pronounced "juice") to open source, it created a splash in the developer community.

Dependency Injection is a pattern to separate an application class from the container's implementation. Another term commonly used for DI is Inversion of Control (IoC). However, IoC is a more generic term with varying interpretations depending upon what control is being inverted; hence the more precise term used here is Dependency Injection. In DI, the framework's container or assembler assumes the responsibility of delivering the required objects (or their finder interfaces) via constructors or methods. Such decoupling makes the class independent of the container, offering a flexible and portable implementation. Moreover, DI-based implementations are very good for testing: one may provide mock implementations and hence test objects which would otherwise be unreachable. The very popular Spring framework is a sterling example of such DI / IoC frameworks (http://www.springframework.org). Prior to the rise of DI-based frameworks, the Service Locator pattern was commonly used for locating required components; JNDI lookup is a classic example of this usage pattern. A small, framework-free sketch of the pattern follows.
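To make the pattern concrete, here is a minimal, framework-free sketch of constructor injection; the MailService, SmtpMailService and OrderProcessor names are made up for illustration. The class under test never looks up its dependency, so a mock can be passed in from a unit test without any container.

// A dependency expressed as an interface
interface MailService {
    void send(String to, String body);
}

// The application class receives its dependency via the constructor
// instead of looking it up (Service Locator / JNDI style).
class OrderProcessor {
    private final MailService mail;

    OrderProcessor(MailService mail) {   // the "injection point"
        this.mail = mail;
    }

    void confirm(String customer) {
        mail.send(customer, "Your order is confirmed");
    }
}

// In a test, a mock implementation can be injected; no container is required.
class MockMailService implements MailService {
    String lastRecipient;
    public void send(String to, String body) { lastRecipient = to; }
}

A DI container like Spring or Guice simply automates the wiring of such constructors.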

So Guice is a DI framework. Guice injects constructors, fields and methods. It has some advanced features such as custom scopes, circular dependencies, and static member injection. Is this a path-breaking, disruptive new technology? Of course not. There are already many frameworks like Pico, Nano, Avalon, Gravity, Spice, Jice, Yan and others, and most importantly the very popular Spring! You can find an exhaustive list of IoC-based frameworks at http://java-source.net/open-source/containers

Guice developers published a comparison of Guice with Spring at http://code.google.com/p/google-guice/wiki/SpringComparison. One of the key differences is how a dependency is defined/bound: Guice uses Java annotations while Spring relies on an XML-based bean registry. A minimal sketch of the Guice style, reusing the hypothetical MailService example above, follows.
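This is roughly how the binding looks in Guice 1.0; the Guice classes (AbstractModule, Guice, Injector, @Inject) are real, while MailService, SmtpMailService, MailModule and OrderProcessor are hypothetical names carried over from the sketch above.

import com.google.inject.AbstractModule;
import com.google.inject.Guice;
import com.google.inject.Inject;
import com.google.inject.Injector;

interface MailService { void send(String to, String body); }

// A hypothetical concrete implementation
class SmtpMailService implements MailService {
    public void send(String to, String body) { /* talk to an SMTP server here */ }
}

// Bindings are declared in compiled Java code instead of an XML bean registry
class MailModule extends AbstractModule {
    protected void configure() {
        bind(MailService.class).to(SmtpMailService.class);
    }
}

// The same OrderProcessor, now annotated so Guice knows where to inject
class OrderProcessor {
    private final MailService mail;

    @Inject
    OrderProcessor(MailService mail) { this.mail = mail; }

    void confirm(String customer) { mail.send(customer, "Your order is confirmed"); }
}

class GuiceBootstrap {
    public static void main(String[] args) {
        Injector injector = Guice.createInjector(new MailModule());
        injector.getInstance(OrderProcessor.class).confirm("someone@example.com");
    }
}

The equivalent Spring wiring would live in an XML bean definition rather than in compiled code, which is exactly the trade-off discussed below.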

A side effect of this dependency on Java annotations is that you cannot use Guice with JDK versions earlier than JDK 1.5 (J2SE 5), because Java annotations (http://java.sun.com/j2se/1.5.0/docs/guide/language/annotations.html) were introduced in JDK 1.5 (J2SE 5).

Annotations are defined in the source code, so they introduce a design-time dependency: you need to change the code whenever there is a change in configuration. Guice overcomes this issue with properties-based externalization. You can still select an implementation class at runtime from properties. The key point is that Guice does not enforce the externalization, so you have a choice. A small sketch of this idea follows.
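As a rough sketch of what such externalization can look like (the app.properties file, the impl.mailservice key and the surrounding classes are assumptions for illustration, not part of Guice's API), the binding target can be resolved from a properties file when the module is configured:

import java.io.FileInputStream;
import java.util.Properties;
import com.google.inject.AbstractModule;

interface MailService { void send(String to, String body); }

class ConfigurableMailModule extends AbstractModule {
    protected void configure() {
        try {
            Properties props = new Properties();
            props.load(new FileInputStream("app.properties"));
            // app.properties contains, e.g.:  impl.mailservice=com.example.SmtpMailService
            Class<? extends MailService> impl =
                Class.forName(props.getProperty("impl.mailservice")).asSubclass(MailService.class);
            bind(MailService.class).to(impl);
        } catch (Exception e) {
            throw new RuntimeException("Could not resolve the MailService implementation", e);
        }
    }
}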

The comparison paper from the Guice developers says the following about configurability and performance: "Spring supports two polar configuration styles: explicit configuration and auto-wiring. Explicit configuration is verbose but maintainable. Auto-wiring is concise but slow and not well suited to non-trivial applications. If you have 100s of developers and 100s of thousands of lines of code, auto-wiring isn't an option. Guice uses annotations to support a best-of-both-worlds approach which is concise but explicit (and maintainable)." We need to take these comments with a grain of salt. The performance impact depends on when, where and how many objects are being created and passed around. Are the objects being injected created only during startup or initialization, or in every session? If the objects being created are not high in volume (or are created only during startup or initialization), the performance advantage of Guice would fade away.

Guice provides a faster, simpler and leaner framework for DI. However, Spring is much richer in functionality beyond DI; for example, Spring XML can configure transaction management, aspects and RMI exporting. With that richer functionality, the Spring jars are comparatively larger than Guice's. Guice promoters argue that Spring is "bloated". But what is the roadmap of Guice? Is it always going to be a stripped-down, DI-only framework, or will it start building Spring-like functionality on top of DI? If Guice too adds more ingredients (read: features), it may become equally "bloated". Guice developers assert that performance is their primary driver; I hope that they maintain this commitment.

So, if you are looking for DI-only functionality, you can give Guice a try. Google developers say that they are already using it in "mission critical" applications. With Google's commitment you have more assurance about the stability and continuity of this framework than with many other, smaller open source projects. However, for the near future, Spring remains the popular choice for complex application development. A more educated decision can be based on a feature-to-feature comparison for the type of application that you are building and the additional functionality that you may need. I think you know that "taste" is relative, so this Guice may or may not be tasty for all people all the time.

References:

Free Guice from Google

http://code.google.com/p/google-guice/

User Guide

http://docs.google.com/Doc?id=dd2fhx4z_5df5hw8

Javadocs

http://google-guice.googlecode.com/svn/trunk/javadoc/index.html

IoC containers

http://java-source.net/open-source/containers

Spring and Guice comparison:

http://code.google.com/p/google-guice/wiki/SpringComparison

Using Spring in Guice: http://google-guice.googlecode.com/svn/trunk/javadoc/com/google/inject/spring/SpringIntegration.html

Service Locator Pattern

http://java.sun.com/blueprints/corej2eepatterns/Patterns/ServiceLocator.html

Dependency Injection using Spring

http://www.springframework.org/docs/reference/beans.html







Monday, March 26, 2007

Exposing PeopleSoft's Component Interface (CI) as a Web Service

PeopleSoft's Component Interface (CI) is a common way to abstract a PeopleSoft Component by encapsulating data and implementation. Typically a CI exposes component properties and provides system-defined and user-defined methods. The PeopleTools IDE provides a nice framework to develop CIs. CIs can be used in App Messages, any PeopleCode and App Engines. CIs are heavyweight objects carrying artifacts like validations. Though a more lightweight object model called Application Message (popularly known as AppMessage) is nowadays more popular among PeopleCode developers, many legacy applications are still exposed as CIs. Moreover, it is easier to expose a Component as a CI.

Thus, exposing CIs as web services provides a quick way to expose existing PeopleCode-based applications to the SOA world. Though versions earlier than PeopleTools 8.48 supported a basic mechanism to do this, PeopleTools 8.48 provides a much easier and richer framework. In the pre-8.48 world, CI-to-web-service exposure was done via SOAPTOCI and exposed one method as a web service.




The Service Designer in PeopleTools 8.48 is a web-based designer to discover, create, publish/consume, and monitor services. One can use the designer to expose a CI as a web service by selecting a CI after navigating to Integration Broker -> Web Services -> CI-Based Service. Providing a web service for an existing CI is as simple as selecting the operations to be exposed and generating the service.

However, the WSDL for such a service would have system-generated names, which may not be very intuitive. To overcome this issue, the designer provides a provision to give alias names for the service and operation names; these aliases are used in the generated WSDL as the service name and operation names. Similarly, one can use the message editor or routing parameters to provide meaningful names like "CURRENCY_RECORD" instead of system-generated names, which typically look like M7869912.V1. If you want to expose a different message shape, you can define a transformation in the routing metadata for the operations.
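Once the WSDL is generated, any standard SOAP stack can call the service. Below is a minimal sketch of an external Java client using the JAX-WS Dispatch API; the WSDL URL, namespace, service, port and operation names are hypothetical placeholders, not actual PeopleSoft-generated names.

import java.net.URL;
import javax.xml.namespace.QName;
import javax.xml.soap.MessageFactory;
import javax.xml.soap.SOAPElement;
import javax.xml.soap.SOAPMessage;
import javax.xml.ws.Dispatch;
import javax.xml.ws.Service;

public class CiServiceClient {
    public static void main(String[] args) throws Exception {
        String ns = "http://example.com/psft/currency";                // hypothetical namespace
        URL wsdl = new URL("http://psft-host/PSIGW/CI_CURRENCY.wsdl"); // hypothetical WSDL URL

        Service service = Service.create(wsdl, new QName(ns, "CI_CURRENCY"));
        Dispatch<SOAPMessage> dispatch = service.createDispatch(
                new QName(ns, "CI_CURRENCY_Port"), SOAPMessage.class, Service.Mode.MESSAGE);

        // Build a request using the aliased operation/field names from the WSDL
        SOAPMessage request = MessageFactory.newInstance().createMessage();
        SOAPElement op = request.getSOAPBody().addChildElement(new QName(ns, "GetCurrency"));
        op.addChildElement(new QName(ns, "CURRENCY_CD")).addTextNode("USD");
        request.saveChanges();

        SOAPMessage response = dispatch.invoke(request);
        response.writeTo(System.out);
    }
}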



If you consider a CI as a proprietary service interface, then providing a CI as a web service is a classic example of wrapping a proprietary service as a web service. It provides a nice abstraction.


Wednesday, March 21, 2007

New license growth of Oracle Fusion Middleware... from the quarterly report

Larry Ellison reported in the quarterly report (March 20th, 2007):

“Our middleware new license sales grew 82% in the third quarter and 62% over the last twelve months,” said CEO, Larry Ellison. “This compares to BEA’s growth rate of 8% in their most recently reported quarter and 12% over their last year. Not only are we growing faster than BEA, we’re now larger than they are in the middleware business.”

http://www.oracle.com/corporate/investor_relations/earnings/3q07-pressrelease-march.pdf


Friday, March 16, 2007

BPEL processes in PeopleSoft Applications using PeopleTools 8.48 (and later)

In the last few days, many customers, professional services engineers and integration specialists have asked about using BPEL with PeopleSoft Applications, so here is a high-level overview of the PeopleTools 8.48 functionality for integrating with BPEL.

In 2005, when I was the Enterprise Architect of PeopleTools (the development and runtime platform of PeopleSoft Applications), one of the important projects we did was to add many features to enable SOA in a better way. PeopleTools 8.48 provided a better service designer to consume or provide web services. Many new features like WS-Addressing and WS-Security support aligned PeopleTools 8.48 to work seamlessly with BPEL. It is much better than the earlier PeopleTools versions 8.46 and 8.47 (both of which are also certified with BPEL 10.1.2)!

Many additional features, like support for WS-Addressing headers and correlation IDs, security credentials, WSIL lookup, and others, allow PeopleSoft Applications and BPEL to discover each other's services and invoke them either synchronously or asynchronously in a secure way.

We also added some features specifically for BPEL, e.g. partner links in the generated WSDLs and utility Application Classes such as BPELUtil and IBUtil. With partner-link support, PeopleSoft services are easier for BPEL to consume.

Using the newly introduced BPELUtil, PeopleCode developers can directly launch a BPEL process. The IBUtil class provides APIs to track the process; it also provides some more utility methods to get the BPEL console URL, domain, etc. In fact, PeopleSoft CRM developed a process monitor for CRM processes using these APIs in release 9.0. Additionally, the BPEL console can be launched within the PeopleSoft portal without re-login.

Consuming a BPEL process was made very easy in PeopleTools. One can consume the service by discovering it from BPEL's WSIL or UDDI, or by directly importing it using a WSDL URL or a WSDL file. A copy of the WSDL is stored in the WSDL repository table. Using the service designer, the developer can select the port type, operations and messages. The developer can also add handlers and routings.







Easy Coding: The consumed BPEL process can be launched (invoked) using very few PeopleCode statements.

Creating a message:

/* Build the request payload as an XmlDoc and wrap it in an Integration Broker request message */
&payload = CreateXmlDoc(&customer);
&msg = CreateMessage(Operation.PROCESS, %IntBroker_Request);
&msg.SetXmlDoc(&payload);

Invoking a BPEL process:

/* &bpelProcess is an instance of the BPEL utility application class (BPELUtil) mentioned above */
&response = &bpelProcess.LaunchSyncBPELProcess(&OPERATION, &msg, "", "");

Processing the response:

If All(&response) Then
   &responseString = &response.GenXMLString();
   WinMessage(&responseString);
Else
   WinMessage("Error: No reply");
End-If;

Providing Services:

PeopleSoft Component Interfaces (CIs) and PeopleCode App Classes can be easily exposed as web services. The Service Designer helps to assemble the service by defining service operations and mapping them to one or more CIs, App Classes or PeopleCode functions (handlers). Using the schema designer, one can design PeopleSoft's rowset-based or non-rowset-based messages, or import a pure XML message as a schema. The service designer abstracts the internal names of handlers and messages, allowing developers to give them appropriate names. After assembling a service, the developer can publish the service as a WSDL to a WSIL or UDDI repository. The WSDL is also saved in an internal WSDL repository and is available for query or export.

Routing and transformations:

Many times the external service may expect or send a message that is different from the internal message. The Service Designer allows developing and using transformations of these messages with a graphical mapper.




PeopleTools supports the following request-response patterns:

  1. One way notification (Fire and Forget)
  2. Synchronous Request Response
  3. Asynchronous Request Response

The asynchronous request-response pattern is supported using WS-Addressing headers. The external WS-Addressing headers, like the correlation message ID, are propagated to the PeopleCode-based application (via IBInfo).

Security: In the world of integrations, security is very important, and different frameworks handle security differently. PeopleTools 8.48 implements a waterfall security model. A username token and passcode token can be set at the service operation or node level, or may have a default. They can be sent encrypted, digitally signed or as plain text. An application programmer can override these using the security override API to provide a different token. These security credentials are sent via WS-Security.

Using this integration framework, enterprise applications like CRM and HCM developed many business processes with BPEL.




(PeopleSoft is a trademark of Oracle Corp. To differentiate it from other ERP applications offered by Oracle, I am specifically using the PeopleSoft name.)


Sunday, March 11, 2007

Service Data Objects (SDO): with a rich DAO/DTO feature set, standardization efforts, and support from J2EE AS vendors, ready to take off?

The Service Data Objects (SDO) specification provides uniform access to heterogeneous data sources like XML, databases, web services, etc. Even though there are already plenty of access mechanisms and specifications like JDBC, JAXB, JDO, ADO, Entity EJBs, etc., SDO still stands out due to some useful features.

The SDO specification provides:

  • Uniform access APIs to heterogeneous data sources
  • Multi-language support
  • A disconnected data graph
  • Dynamic APIs
  • Static APIs generated from the data source's schema or UML
  • XPath-based navigation through the data graph
  • Change summary
  • Validations and constraints
  • Metadata access, which is useful for tool builders
SDO comparison with other data programming technologies
(table source: Ref. 2, Next-Generation Data Programming whitepaper, http://www.osoa.org/download/attachments/287/Next-Gen-Data-Programming-Whitepaper.pdf?version=1)


Technology         | Model        | API     | Data Source         | Metadata           | Query language
JDBC Rowset        | Connected    | Dynamic | Relational          | Relational         | SQL
JDBC cached Rowset | Disconnected | Dynamic | Relational          | Relational         | SQL
Entity EJB         | Connected    | Static  | Relational          | Java introspection | EJBQL
JDO                | Connected    | Static  | Relational + Object | Java introspection | JDOQL
JCA                | Disconnected | Dynamic | Record based        | Undefined          | Undefined
DOM & SAX          | NA           | Static  | XML                 | XML Infoset        | XPath, XQuery
JAXB               | NA           | Static  | XML                 | Java introspection | NA
JAX-RPC            | NA           | Static  | XML                 | Java introspection | NA
SDO                | Disconnected | Both    | Any                 | SDO                | Any



These rich features offer several benefits to different players:

For software architects and programmers it is useful to have a uniform representation of various data sources. As with many other APIs, a separation of data-source-specific APIs from business logic is highly desirable. With SDO, the interaction with the data source is abstracted away from application developers; those who handle the persistence layer or provide a mediation framework deal with the data sources.

Uniform access, metadata and dynamic APIs are very useful features for tool developers.

A disconnected data graph, a data change summary and optimistic concurrency would help application builders to build SOA-oriented applications where a disconnected client can manipulate data and then save it back to the data source. A small sketch of the dynamic API is shown below.
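As a rough illustration of the dynamic API and path navigation, here is a minimal sketch; it assumes an SDO 2.0 implementation such as Apache Tuscany SDO on the classpath, and a hypothetical customer.xsd defining a Customer type in the namespace http://example.com/customer.

import java.io.FileInputStream;
import commonj.sdo.DataObject;
import commonj.sdo.helper.DataFactory;
import commonj.sdo.helper.XMLHelper;
import commonj.sdo.helper.XSDHelper;

public class SdoSketch {
    public static void main(String[] args) throws Exception {
        // Register the types described by the (hypothetical) schema
        XSDHelper.INSTANCE.define(new FileInputStream("customer.xsd"), null);

        // Dynamic API: create and populate a data object without generated classes
        DataObject customer = DataFactory.INSTANCE.create("http://example.com/customer", "Customer");
        customer.setString("name", "Acme Corp");
        DataObject address = customer.createDataObject("address");
        address.setString("city", "San Jose");

        // XPath-like path navigation through the data graph
        String city = customer.getString("address/city");
        System.out.println("City: " + city);

        // The object can be serialized to XML, shipped to a disconnected client,
        // modified there, and the change summary used to update the source later.
        XMLHelper.INSTANCE.save(customer, "http://example.com/customer", "customer", System.out);
    }
}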

Standardization of SDO:

In November 2003, JSR 235 was filed to standardize SDO in the JCP. Unfortunately, due to some legal issues, this JSR never made any progress.

However, in addition to BEA and IBM, many other firms like Oracle, SAP, etc. joined the effort to develop the SCA and SDO specifications. As a result of this collaboration, under Open Service Oriented Architecture (OSOA), a much more mature version 2.0 was introduced in 2005. While this collaboration made very good progress, the specification would not be standardized immediately: it would need to be submitted to a standardization body like OASIS, which would follow its own process. However, with the agreement of all the collaborating companies, the specification can serve as an intermediate, 'de facto' standard.

With Sun Microsystems joining the SDO and SCA efforts in July 2006, I hoped for a revival of JSR 235. However, SDO 2.0 was never submitted to JSR 235. Moreover, with multi-language support, the SDO 2.0 specification differed from SDO 1.0.

Implementations of SDO: In WAS 6.0, IBM converted its WDO to SDO 1.0, while BEA added an SDO 1.0-based implementation in its Liquid Data. Some other vendors like Rogue Wave (HydraSDO), Xcalia (XIC) and SAP (NetWeaver J2EE 5 AS) introduced products supporting SDO 1.0, while Oracle and others announced work based on OSOA's latest SCA and SDO specifications. Oracle added SDO support in its recently announced open-source TopLink.

Though SDO 1.0 introduced the basic architecture and interfaces like DataObject and DataGraph, it was incomplete due to the lack of a specification for data mediation services, a key architectural module, and other features. Thus it lost some portability across implementations.

An open source community is currently working on a project named Tuscany that provides an implementation of SCA and SDO. Though there are high hopes for this effort, there is still a lot to be added.

Industry adoption of SDO:

For many reasons, SDO took a long time to gain momentum. Most importantly, it is a specification that was not standardized immediately after it was introduced. Moreover, in the initial period, not all J2EE application server vendors supported it, so implementations based on SDO were not portable across J2EE application servers. Since JSR 235 was stalled, SDO lacked wider visibility in the Java community.

SDO: A plane ready to take off?

SDO version 2.0.1 is much more mature. It has added more languages like C++, PHP, etc., and there are plans to support C and COBOL too. However, some pieces of the SDO architecture are still scoped out of the specifications; the most important scoped-out piece is the Data Access Service (DAS). If different vendors implement the DAS differently, making SDO-based code non-portable across application servers, there will be roadblocks to its momentum. The success of SDO depends on its support on all J2EE application servers.

SDO offers richer functionality like change summary, dynamic APIs and metadata; however, these rich features should be implemented so that the performance of SDOs in access, updates and serialization is not affected.

As we already know, there are a lot of competing technologies and alternatives for accessing data. Microsoft has ADO in its stack, and WCF-specific implementations may continue with the same. Many applications in the Java spectrum may still prefer POJOs, JDO, JDBC or JAXB for optimal direct access. However, features like the disconnected model, change summary and multi-datasource support would provide a sweet spot for SDO in the SOA world. Again, in that case, SDO needs a proper integration story with currently popular frameworks like Axis, WSIF and JAX-WS.


SDO may see better support in SCA-based solutions; specifically, some SCA implementations and ESBs may increase the adoption of SDO. By the way, SCA itself does not strongly advocate SDO; one can have an implementation of SCA without SDO. However, within an SCA composite, one may find optimal use of SDOs as DTOs transferring data over 'wires'.

Since application leaders like Oracle and SAP are on board with SCA and SDO, if their applications also get aligned with these technologies, it would be a shot in the arm.

Today, SCA and SDO have received wide support from all major J2EE application server vendors. If they deliver on their promises by adding SCA and SDO to their stacks, if usage of SDO itself proves a "practical" advantage over other DTOs/DAOs, and if the SCA-SDO specifications get standardized, we will see a wide availability of products, tools and resources that will enable a large pool of developers and architects to develop solutions and products based on SDO. Finally, it seems that most of the stars are getting aligned! Hope for the best.

References:

[1] SDO 2.0 specifications

http://osoa.org/display/Main/Service+Data+Objects+Specifications

[2] SDO whitepaper

http://www.osoa.org/download/attachments/287/Next-Gen-Data-Programming-Whitepaper.pdf?version=1

[3] SDO 1.0 specifications:

http://xml.coverpages.org/IBM-BEA-SDOv10.doc







Monday, March 05, 2007

Mapping between XML & JSON: Need a standard way

We need a standard way of mapping between XML and JSON. Let me explain why.

Currently, we have two popular conventions for mapping between XML and JSON:
1. BadgerFish (http://badgerfish.ning.com)
2. Mapped
The main difference between these two conventions is how namespaces are mapped.

For example, consider this XML:

<xsl:root xmlns:xsl="http://mynamespace.com">
<detail>my details</detail>
</xsl:root>


In the case of "BadgerFish", the above XML would be mapped as:
{"xsl:root":{"@xmlns":{"xsl":"http://mynamespace.com"},"detail":{"$":"my details"}}}
In the case of "Mapped", a namespace is mapped to a configured name; e.g. http://mynamespace.com gets mapped so that the root element becomes mynamespace.root:
{"mynamespace.root":{"detail":"my details"}}


While BadgerFish preserves the full XML Infoset in JSON, the output becomes verbose when there are many namespaces, as in the following example (from the BadgerFish site):

<alice xmlns="http://some-namespace" xmlns:charlie="http://some-other-namespace">
<bob>david</bob>
<charlie:edgar>frank</charlie:edgar>
</alice>
it becomes
{ "alice" : { "bob" : { "$" : "david" , "@xmlns" : {"charlie" : "http:\/\/some-other-namespace" , "$" : "http:\/\/some-namespace"} } , "charlie:edgar" : { "$" : "frank" , "@xmlns" : {"charlie":"http:\/\/some-other-namespace", "$" : "http:\/\/some-namespace"} }, "@xmlns" : { "charlie" : "http:\/\/some-other-namespace", "$" : "http:\/\/some-namespace"} } }
This, in my opinion, seems more cluttered than the XML string.

In summary, because of these two different mapping conventions, we need two sets of parsers and builders for XML<->JSON conversion, which is very inconvenient. Either we need a superset parser, or, better, a standardized mapping. The small sketch below shows how client code has to branch on the convention just to read the same value.
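As a minimal illustration (using the org.json JSONObject class; the JSON strings are exactly the "BadgerFish" and "Mapped" examples from the first sample above), reading the same logical "detail" value requires different code for each convention:

import org.json.JSONObject;

public class ConventionDemo {
    public static void main(String[] args) throws Exception {
        String badgerfish =
            "{\"xsl:root\":{\"@xmlns\":{\"xsl\":\"http://mynamespace.com\"},\"detail\":{\"$\":\"my details\"}}}";
        String mapped =
            "{\"mynamespace.root\":{\"detail\":\"my details\"}}";

        // BadgerFish: element text content lives under the "$" key
        String fromBadgerfish = new JSONObject(badgerfish)
                .getJSONObject("xsl:root").getJSONObject("detail").getString("$");

        // Mapped: simple elements collapse to plain string values
        String fromMapped = new JSONObject(mapped)
                .getJSONObject("mynamespace.root").getString("detail");

        System.out.println(fromBadgerfish + " / " + fromMapped);
    }
}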



Thursday, March 01, 2007

ESB vendors' response to SCA


Most of the key ESB vendors are already participating in finalizing the SCA 1.0 specifications. The list includes Oracle, IBM, BEA, TIBCO, Sun, IONA, Progress (Sonic), JBoss (Red Hat), Cape Clear, and more. (You may find the latest list at http://www.osoa.org/display/Main/Service+Component+Architecture+Partners)

Most of them see value in SCA as the promise of a standard for a simplified yet powerful component model to assemble SOA-based applications and integrations. They understand the need for a common framework to connect and assemble SOA components so that they can be deployed on various middleware, including their ESBs.

(http://www.osoa.org/display/Main/Partner+Motivations)

The commitments from different participants may vary significantly. Some may participate to provide input and be part of the standardization, with a two-fold focus: to influence the direction and to align their products. Some of them may start implementations based on the SCA specifications; early participants like IBM, BEA and Oracle would offer SCA-based solutions earlier than others, and Apache's open source ServiceMix already allows deploying an SCA composite in its container. Other vendors may work on aligning their products to support SCA. However, some vendors may play wait-and-see, watching market adoption and providing a choice when customers demand it.

Irrespective of their commitment, they may support SCA differently.

There are three broad possibilities.

  1. Build a platform on an SCA foundation: SCA is a first-class citizen. Such a platform can be monolithic or modular, with different engines for different component types like BPEL, Java, etc. Such an implementation would directly use the SCA assembly metadata.
  2. Deploy SCA on their container: support deployment of an SCA composite in their container. In this model, as part of deployment, the container consumes the SCA composite but may transform it to its own runtime metadata.
  3. Integrate: integrate with an SCA-based container, but the SCA-based container sits outside the core platform.

Many ESB vendors took one of these approaches (or a fourth one, "do nothing") for JBI. For example, ServiceMix is based on JBI, while Oracle ESB 10.1.3 provides a JBI container to integrate with the ESB.

Here, I am citing JBI as an example of the ESB world's response to a standard specification. JBI and SCA have some overlapping and some complementary functionality; JBI versus SCA is another subject to discuss in detail. Since we are on the subject, I will make a passing note that JBI provides framework and runtime specifications (SPIs) mainly for integration architects and middleware runtime platforms so that runtime containers can be interconnected, whereas the SCA specification does not dictate a runtime implementation. One may enhance a JBI-based runtime for SCA: there is a little overlap but a very good value addition.

In summary, SCA is a very good specification for ESBs to provide an SOA platform. We may see some innovation from ESB vendors while implementing it.



