Monday, September 17, 2007

Making Sense of Fusion Middleware

The Fusion Middleware area has become increasingly crowded with all of the products it now includes.

I speak from personal experience on this topic, since I often find myself stumbling through the various acronyms used for these products.

Most of us think that Fusion Middleware refers to Service-Oriented Architecture, Identity Management and Business Process Management since these are usually the emphasis of Oracle's presentations. In actuality there's an amazing breadth of other solutions when you dig into Oracle's offering.

To add to the complexity, some of the Fusion Middleware products are further grouped into Suites for marketing purposes. That's good because the Suites are targeted to specific business needs. If you are looking for Business Process Management, it's nice to find that term. The downside is that it can be confusing when the same product is found in multiple Suites.

For example, compare the components of the Event Driven Architecture Suite and the Service-Oriented Architecture Suite. You will find that they both include Business Activity Monitoring, the Enterprise Service Bus, Enterprise Messaging (Java Messaging), and Business Rules. Each then adds one or more other Fusion Middleware products.

So, if we really want to understand the Fusion Middleware products it's best to break them down individually and understand their general capabilities. I've done that in a PDF file available here. Once you see it, you'll understand why I didn't just publish it here.

For those who are too busy/lazy to peruse the PDF file, here's a brief overview of the various components of the larger product groups/suites in the Fusion Middleware and what they currently include.

  • The Application Server is the core component of Fusion Middleware. It provides J2EE 1.4 support, Web Services Support, Enhanced Messaging, Transaction Management, Security Features, Clustering, Grid Computing, and Monitoring & Management.
  • The Business Integration Suite includes products like Master Data Management, XML Gateway, B2B Integration and the BPEL Server.
  • The Business Intelligence Suite includes Warehouse Builder, OBIEE, BI Publisher, BI Answers, BI Discoverer and the Hyperion products.
  • The Business Process Analysis Suite provides the ability to model, simulate and publish business process models.
  • The Business Process Management Suite is a superset of the Business Process Analysis Suite and the Service-Oriented Architecture Suite.
  • The Collaboration Suite includes groupware, unified messaging and real-time collaboration products.
  • The Content Management Suite supports content management, publishing, tracking and distribution with retention policy capability.
  • The Event Driven Architecture Suite is a marketing package that includes products that generate and report on events.
  • The Identity and Access Management Suite provides LDAP, Single-Sign On, Federation, and Virtual Directories.
  • The Middleware for SMB provides a lower-end solution that meets the needs of Small and Medium-sized Businesses using Standard Edition One of the Application Server.
  • The Middleware for Applications is a marketing package that includes middleware products for each of the application suites.
  • The Service Delivery Platform offers new communication options including Residential VoIP, virtual PBX and the Communication and Mobility Server.
  • The Service Oriented Architecture Suite includes those products used to define, orchestrate, monitor, secure and tune the execution of Web Services.
In my next post I want to talk about what's of interest in the Application Server 10g.

Friday, September 7, 2007

Options for Generating XML for BI Publisher

Oracle's BI Publisher is a powerful tool for producing richly formatted documents. It takes any well-formed XML data and refines it into custom invoices, checks, bar coded labels, web pages – the possibilities are limitless. The source for the XML data can be any process that can generate well-formed XML data ("well-formed" simply means that it complies with XML standards). In the Oracle Applications world, this typically means creating a duplicate of an existing report and then modifying its output to generate XML rather than text. However, there are many occasions where the report requirements don't correspond with an existing report and you'll have to look at other options to generate XML data.

BI Publisher Data Templates provide the overall best method for producing XML output that is to be used with BI Publisher. Data Templates are themselves specially formatted XML documents that contain SQL code along with other processing instructions. They are relatively simple to construct, requiring only a text editor and a working knowledge of the template elements and layout. Once complete, the Data Template is loaded into the Template Manager as part of the data definition. Everything is managed through BI Publisher, thus eliminating the need to create a separate concurrent process. At runtime, execution is handled via the BI Publisher Java APIs, and the entire process has been tuned to outperform any other method for producing XML output. Keep in mind, however, that all the appropriate BI Publisher patches must be in place for Data Templates to be used, and this is usually not the case.

The PL/SQL XDK (XML Developer's Kit) packages offer a powerful option for generating XML data. The XDK provides methods for creating and manipulating XML documents through the DOM, or Document Object Model. The DOM methods provide a maximum amount of control when generating new documents or modifying existing ones. Unfortunately, the learning curve is somewhat intense and it takes a substantial amount of coding to perform even the simplest of tasks. The XDK is probably best suited for hardcore developers who need an advanced level of control over XML generation.
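
To give a feel for the DOM style of coding, here is a minimal sketch of an anonymous PL/SQL block that builds a tiny document with the DBMS_XMLDOM package; the element names and the hard-coded value are purely illustrative, and a real generator would of course pull its content from queries.

DECLARE
  l_doc    dbms_xmldom.DOMDocument;
  l_root   dbms_xmldom.DOMElement;
  l_item   dbms_xmldom.DOMElement;
  l_text   dbms_xmldom.DOMText;
  l_node   dbms_xmldom.DOMNode;
  l_buffer VARCHAR2(32767);
BEGIN
  -- Create an empty document and attach a root element
  l_doc  := dbms_xmldom.newDOMDocument;
  l_root := dbms_xmldom.createElement(l_doc, 'INVOICES');
  l_node := dbms_xmldom.appendChild(dbms_xmldom.makeNode(l_doc),
                                    dbms_xmldom.makeNode(l_root));
  -- Add one child element holding a text value (illustrative only)
  l_item := dbms_xmldom.createElement(l_doc, 'INVOICE_NUM');
  l_node := dbms_xmldom.appendChild(dbms_xmldom.makeNode(l_root),
                                    dbms_xmldom.makeNode(l_item));
  l_text := dbms_xmldom.createTextNode(l_doc, '10001');
  l_node := dbms_xmldom.appendChild(dbms_xmldom.makeNode(l_item),
                                    dbms_xmldom.makeNode(l_text));
  -- Serialize the document, print it and free the memory it holds
  dbms_xmldom.writeToBuffer(l_doc, l_buffer);
  dbms_output.put_line(l_buffer);
  dbms_xmldom.freeDocument(l_doc);
END;
/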

You may also choose to generate your XML data using your favorite scripting language. Since XML data is really just specially formatted text, any process that can write text output can produce XML documents for use with BI Publisher. Perl, PHP and even shell scripts can be written to generate XML, as long as you follow the guidelines set forth by the XML standards. Many of these common scripting languages also offer add-on libraries specifically designed to simplify the process of generating well-formed XML documents. Plus, if you're already familiar with any of these languages then the implementation should be reasonably straightforward. There is a downside to this method: most likely you'll need to access data stored in the Apps database, and this can get a little tricky when the script is called from a concurrent process. You'll need to install the appropriate libraries to access the Oracle database or make system calls to SQL*Plus.

Another little-known method that can be used to produce XML data in many situations is SQLX, or SQL/XML. SQLX has different meanings depending on the context. Typically, SQLX refers to a set of technologies used to query information from an XML database or document. SQLX also refers to a set of functions used within a SQL query to produce XML output. SQLX functionality is available in Oracle 9i and 10g databases, but the later versions provide much more usability. SQLX scripts can be set up as SQL concurrent requests and immediately processed with BI Publisher. They can also be embedded into PL/SQL procedures so that the output is written to a file and then processed with BI Publisher or further manipulated. The SQLX functions are intuitive, and with a little ingenuity and experimentation you should be able to quickly produce XML documents to meet most BI Publisher requirements.
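
As a sketch of what the SQLX style looks like, the query below nests XMLELEMENT, XMLFOREST and XMLAGG to produce a single invoice document; the table and column names (an AP example) and the date filter are assumptions for illustration only and should be replaced with your own sources.

-- One <INVOICES> document containing an <INVOICE> element per row
SELECT XMLELEMENT("INVOICES",
         XMLAGG(
           XMLELEMENT("INVOICE",
             XMLFOREST(inv.invoice_num    AS "INVOICE_NUM",
                       inv.invoice_date   AS "INVOICE_DATE",
                       inv.invoice_amount AS "AMOUNT")))).getClobVal() AS xml_output
  FROM ap_invoices_all inv
 WHERE inv.invoice_date >= ADD_MONTHS(TRUNC(SYSDATE), -1);

Run as a SQL concurrent request, the output of a query like this can be handed straight to a BI Publisher template.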

Posted on behalf of:
Tim Sharpe | Solution Architect

Monday, August 27, 2007

Discoverer and ORA-1483 Errors

This article is based on a friend's Discoverer experiences. He had an issue with Discoverer 10.1.2 where every time a user tried to save a workbook to the database, the save failed with the following error:
ORA-1483 INVALID LENGTH FOR DATE OR NUMBER
Upon further investigation it was noted that they were using version 10.1.0.4 of the database and that the database was installed on an HP-UX (PA-RISC) 64-bit server.

According to Oracle there is a known bug, number 3668164 entitled "SAVING A WORKBOOK USING DISCOVERER 10G GIVES - ORA-1483", within the 10.1.0.4 database that will prevent Discoverer from being able to save to the database. Upon further investigation it turns out that the operating system is a red herring and that this issue can in fact arise on any operating system!

The solution is to patch the database to 10.1.0.5 or higher. Apparently there is a one-off patch for 3668164, unless your database is running on a Windows server, in which case you will have to apply the full 10.1.0.5 database upgrade. In the end the client upgraded their database to 10.1.0.5 anyway, as many other bugs were fixed in this release.

After they upgraded their database the issue went away.

Note on this bug: bug 3668164 is not available for public viewing. This is most frustrating because Oracle has a lot of cross references to it on MetaLink.

Posted on behalf of:
Srinu Katreddi | Oracle Applications Consultant

How to determine Workflow Mailer status without using OAM

Have you been trying to determine the availability of WF Mailer without using OAM?

The following query provides the status, eliminating the need to use OAM.

SELECT component_name, component_status
FROM fnd_svc_components
WHERE component_type = 'WF_MAILER';

Reference: Metalink Doc: 316352.1

Posted on behalf of:
Srini Ramanujam | Senior Oracle Consultant

Tuesday, August 7, 2007

Inside the Middle

I've written generally about Oracle Fusion Middleware but let's move down to the next level of detail. An easy way to do this is to focus on functionality. There are seven functional categories of products in Fusion Middleware, per Oracle.

The Unified Workplace group provides collaboration tools (Groupware, Instant Messaging), portals, mobile/desktop presentation and secure search.

The Composition and Process Orchestration group includes the Business Process Execution Language (BPEL) Process Manager and the Enterprise Service Bus (ESB). These products enable the integration of application services across disparate systems so everyone's communicating interactively.

The Development Tools group includes JDeveloper, Oracle's Java Integrated Development Environment (think Visual Studio for Java). Several development framework products (ADF, TopLink…) are also included here. They extend JDeveloper's capabilities and simplify sophisticated Java development.
Development Tools also includes Oracle's Process Modeling tool, licensed from IDS Scheer (ARIS), along with the Business Rules developer.

The Enterprise Application Server group consists of the 10g Application Server (J2EE), Services Registry and Web Services support.

The Security group provides Identity Management for people/roles and for those rogue application services that might exist in a Service Oriented Architecture (SOA). Rogue? Sounds cool, doesn't it?

The Management group of products includes Oracle Enterprise Manager, Web Services management and tools for BPEL/BAM monitoring. Like all good management it ensures that no one is slacking off including the database, network and you.

The Information and Aggregation Analysis group includes Oracle's Business Intelligence Enterprise Edition (OBIEE), formerly known as Siebel Analytics. You will also find Business Activity Monitoring (BAM), Content Management, Oracle Master Data Management and BI Discoverer. There's a data warehouse in there somewhere, but the focus is more on pulling information from operational systems.

With so many products in Fusion Middleware it's easy to get lost and never want to go back.

For that reason I'm seeing Oracle's current marketing focus in the Fusion Middleware area falling primarily on:
  • Business Intelligence
  • Identity Management
  • Service Oriented Architecture
  • Business Process Management
These are nice industry buzzwords that most of us have encountered in the press. Oracle knows that and hopes it will help us get on board with the products. I think it's a good strategy.

I'll close this post with my perceptions on Fusion Middleware, in general. See if you agree.

1. Because of Fusion Middleware’s breadth there is a lot of uncertainty about how to leverage its capabilities. Right now there are a few early adopters, excepting Business Intelligence, who are really using the products effectively. But be aware. There is a wave forming out there just beyond your view that will bring this into the mainstream.

2. This family of products is one that will generate a lot of interesting work for everyone. How will that impact your career?

3. Business Process Management (BPA, BPEL) is not glorified Workflow though that’s an easy way to first categorize it. More on that in a future blog.

4. Fusion Middleware relies significantly on industry standards (Java, XML, SOA, BPEL, etc.). That's a good thing, and we should all be sleeping better because of it.

5. Fusion Middleware will create new job roles in most organizations. Remember when there was no such thing as a “DBA”? Well, get ready for some new acronyms!

6. Middleware will put pressure on many of the barriers that currently exist between the technical IT types and the Functional Business types because it will require both to work much more closely than in the past. We are potentially talking about the end of the Departmental Cold War in many organizations.

7. The underlying principles of SOA have the potential to revolutionize the agility of business processing and the efficiency of IT beyond all we’ve seen in the past. I know that sounds a bit over the top but I've drunk the kool-aid and I'm not going back.

In my next post: Oracle also packages the Fusion Middleware products into several Suites that bundle them for easier marketing.

To help sort it out I'll provide a table of the products that should help better illustrate all of this. Plus I'll try to make it interesting! That should be worth a look, right?

Wednesday, August 1, 2007

Starting in the Middle

During the Cold War, US and Russian analysts spent millions of hours analyzing one another’s public pronouncements trying to understand what was really going on with the other side.

In the same manner today, we often find ourselves trying to understand what Oracle is trying to communicate (or not communicate) through their many announcements, white papers and presentations.

One area where this applies for me is the topic of Oracle Fusion Middleware (OFM). It's clear that there is some confusion around this moniker. So, I’ve been sorting through the loads of information on the web to come up with a coherent view of OFM that both explains it for me (duh) and at the same time allows me to communicate the proposed value of OFM more effectively with others. It all started with Fusion.

Several years ago Oracle introduced the word “Fusion” as a new term. Fusion, they said, would help companies reach “The sustainable competitive advantage achieved by continuous blending of business insight and process execution.” If that phrase leaves you wondering what was just said, welcome to my world...

At the highest level, Fusion is Oracle's planned architecture (and strategy) for dynamically integrating disparate systems and functions in the organization to provide needed business value. Its key contribution is helping business applications adapt to real-world business processes more rapidly.

Here's one simple example. Today you have a Procurement process that has several sub-processes like A->C->B->D. There's a problem though. You need to respond to market or regulatory changes quickly and that requires you to add additional steps and re-order these sub-processes. Today you call in the developers and plan a project. But what if you could do this with a minimum of software development using powerful tools and existing functionality already in the application? And even better, what if you could include both software and human workflow processes in the mix? If this sounds interesting, then let's move on. Otherwise, thanks for the read...

But there's more than the term "Fusion"! And right now that "more" is Fusion Middleware. Unfortunately there is a linguistic issue that needs to be discussed. The controversy centers on the term "Middleware". It's unfortunate but true that "Middleware" connotes different things to different people. EBS, PeopleSoft, Siebel and other application users probably assume that Middleware isn't of much interest to them because it sounds technical and of little business value. Database purists, on the other hand, see Middleware as one more application they have to interact with. At least some software developers are comfortable with the term, so all is not lost.

The reality is that Oracle Fusion Middleware is still trying to fit in. Like a new kid, OFM, has a funny name, looks uncoordinated and isn’t yet fully understood or accepted by the rest of the Oracle classmates.

A quick look shows that OFM consists of a large family of products that don’t necessarily have anything to do with each other. What they do have in common is that they can extend the business processes of the organization, leverage the strengths of the Oracle Database and generally rely on the Oracle Application Server for their livelihood.

And what really holds them together is that they are the building blocks for more powerful applications. Both the kind your business needs today and the ones that Oracle hopes to offer in the future. But this doesn't mean we should just sit and wait for some future Fusion applications release. Let's find out if there's any real business value available now in OFM.

Next week I'll take a look under the hood of Oracle Fusion Middleware. We'll look at what's there and where this new business value might be lurking.

Rob McMillen - Fusion Middleware Practice

Thursday, July 26, 2007

Webinar: Are you Ready for Fusion?

This is another in our Release 12 webinar series, and will be presented live, with the recorded replay available for registered attendees in the near future. This one hour webinar will be presented on August 15th at 10:30am CDT, and registration is available here.

Title
Are you Ready for Fusion?
Abstract
A Practical Guide to What You Should Know. For those of you concerned about Fusion's place in your organization, this presentation will tell it like it is. We'll discuss the things you need to know - and prepare for - in order to be ready for Fusion when it arrives in December 2008. This presentation is aimed at both technical and functional users, and we'll start with the most important questions of all: "What's the big deal? Why should I care?"

Webinar: Release 12 Accounting Setup Manager 101

This is another in our Release 12 webinar series, and will be presented live, with the recorded replay available for registered attendees in the near future. The webinar will be presented on August 8th at 1:30pm CDT, and registration is available here.

Title
Release 12 Accounting Setup Manager 101
Abstract
Learn the basics of the Release 12 Accounting Setup Manager from this exciting presentation. This new and powerful tool allows users to set up and maintain legal entities, ledgers, accounting rules, reporting currencies and intercompany transactions through a user-friendly interface.

Update: the webinar's recording is now available for viewing!

Webinar: Release 12 Java Infrastructure

This is another in our Release 12 webinar series, and will be presented live, with the recorded replay available for registered attendees in the near future. This one hour webinar will be presented on 8/8 at 10:30am CDT, and registration is available here.

Title
Release 12 Java Infrastructure
Abstract
Learn about the new Java infrastructure underlying the Release 12 environment. The new technology delivers significant changes to how the application server works. Topics covered include the architecture of the new technology components, and administration tasks necessary to support them.

Tuesday, July 24, 2007

Webinar: Release 12 Subledger Accounting Engine

This is another in our Release 12 webinar series, and will be presented live, with the recorded replay available for registered attendees in the near future. This one hour webinar will be presented on August 15th at 1:30pm CDT, and registration is available here.

Title
Release 12 Subledger Accounting Engine
Abstract
What it is, What it does, and How to use it. Be in the know by attending this presentation. The Subledger Accounting engine enables centralized processing of accounting from the subledgers in Financials. This presentation highlights features, functionality, setups, and processing.

Webinar: Release 12 Procurement Part I – The Professional Buyer's Work Center

This is another in our Release 12 webinar series, and will be presented live, with the recorded replay available for registered attendees in the near future. The webinar will be presented on 7/25 at 10:30am CDT, and registration is available here.

Title
Webinar: Release 12 Procurement Part I – The Professional Buyer's Work Center
Abstract
Exciting things are happening to the Procurement Suite in Release 12, as the whole module has been recoded with a new user interface and further integration of the various purchasing modules. In Release 12, contracts, services and sourcing have all come together in a coordinated fashion in the Buyer's Work Center.

Webinar: Release 12 Multi-Org Access Control (MOAC) – An Inside Look

This is another in our Release 12 webinar series, and will be presented live, with the recorded replay available for registered attendees in the near future. The webinar will be presented on 7/25 at 1:30pm CDT, and registration is available here.

Title
Release 12 Multi-Org Access Control (MOAC) – An Inside Look
Abstract
Join us as our Solution Architect focuses on Multi-Org Access Control (MOAC), which supports the enhanced shared service functionality in Release 12. Topics include detailed setup instructions and processing flows after MOAC is enabled. Technical impacts of the MOAC architecture will also be presented at a high level.

Simple Tutorial for Publishing FSG Reports Using XML Publisher

This simple tutorial will show you how to create a custom FSG Report using XML Publisher.

1. Log in to Oracle Applications and select the "XML Publisher Administrator" responsibility (your Applications Administrator will have to grant access to this responsibility).

2. Navigate to the Templates page.

3. Type FSG% in the Name field and click "Go" to query the standard FSG templates supplied by Oracle.

4. Click the "FSG: Basic Template" item.

5. Click the Download icon and, when prompted, save the template (RGFSGXPG.rtf) to your local file system.

6. Using your local file explorer, navigate to the location where you saved the RGFSGXPG.rtf file. Open the template in Microsoft Word – typically you can just double-click the .rtf file.

7. Update the template to suit your needs. A typical task would be to add a logo. The next few steps show how this can be done.

8. Place your cursor in the left-most cell of the title table.

9. On the MS Word standard menu, click Insert->Picture->From File.

10. Navigate to and select an image file (.bmp, .jpg, .gif, etc) containing your company logo to insert the image.

11. Save the personalized template with a new file name.

12. Navigate back to the Templates page and select the "Create Template" button.

13. Fill in the required fields and click "Apply". Note that Type must be "RTF" and Data Definition must be "FSG program".

14. This template should now work with any existing FSG report.

15. Log on to Oracle Applications and select a responsibility that has access to run FSG reports, such as General Ledger Super User.

16. Navigate to Requests and submit a new request.

17. Query on Name = "Program – Publish FSG Report"

18. Fill in the required parameter values and click OK. Note that the value for the Report parameter should be the name of the FSG report to be run. The value for Period is the desired period to be processed. Select the template that was created in the previous steps for the value of the Template field.

19. Submit the request and then query your requests to see the results. Notice that the Program – Publish FSG Report request has spawned two other requests. The first runs the selected FSG report to produce XML output. You can view and save the XML data by selecting the first spawned request and then clicking "View Output". The second request processes the XML output with XML Publisher to produce the final report. You can view and save the final report by selecting the second spawned request and clicking "View Output". This entire process can also be scheduled using the standard technique for scheduling concurrent requests.

Your final report should now be neatly formatted and complete with your company logo!

Posted on behalf of:
Tim Sharpe | Solution Architect

Wednesday, July 18, 2007

Introducing the Solution Beacon Release 12 Webinar Series

We're pleased to announce our first Release 12 Webinar Series! These live webinars range from 30 to 60 minutes and are intended to inform people about the new Oracle E-Business Suite Release 12. Topics include a Technical Introduction for Newcomers, Security Recommendations, and reviews of the new features in the applications modules, so whether your focus is functional or technical you're sure to find a topic of interest.

Stay tuned to our newsletter, or check back here for details of the presentations that are being scheduled and how to sign up for them. In fact, if you've not signed up for our RSS feed, this might be a good time to do so.

Friday, July 13, 2007

Architectural Differences in Linux

In this second edition in the Evaluating Linux series of posts I want to discuss what is both a strength and a weakness of Linux, namely the architectural differences between it and the traditional UNIX platforms. The relevant architectural differences between Linux and UNIX (AIX, HP-UX, Solaris: take your pick) can be grouped into several broad categories:
  • hardware differences
  • filesystem selection
  • flexibility
  • scalability
  • evolution
Hardware
Many of the core attributes of Linux come from the hardware that it runs on, in most cases Intel-compatible systems. Whether these use actual Intel processors or the extremely competitive AMD processors, they are typically referred to as x86 or x86-64 systems.

In the commodity server market most systems have a limit of 4 CPU chips per server, which is based upon limitations with the chips and motherboards. To address this AMD and Intel are producing chips that allow for two or four cores per chip, with eight and more on their near term roadmaps. Each core is itself a full CPU, and combining multiples of them onto a single chip allows the sharing of things like cache memory, helping to reduce the demands placed on the comparatively slow RAM and I/O bus.

Beyond physical processing limitations, most servers have a single I/O bus, limiting very I/O-intensive applications. This bus design is more than adequate to support average database or application workloads, but where it can fall short is in applications that require truly large amounts of data movement, like data warehousing or imaging systems. Both of these limitations are areas where traditional UNIX hardware shines. Because of their use of specialized processors, busses, and I/O chips, those servers are able to scale into the dozens of CPUs per server, or into the gigabytes per second of I/O load. This also helps to put the price differences between commodity x86 servers and proprietary UNIX servers into perspective.

Filesystems
One of the aspects of open source software is that many people try to improve on the status quo, and one of its strengths is that they often succeed. This is evident in the selection of filesystems available for Oracle databases running on Linux. Almost all relevant Linux distributions ship with the EXT3 filesystem as the default, and it's not uncommon to see it housing Oracle binaries and data files. This is a non-clustered filesystem suitable for general-purpose use.

In a clustered environment, such as an architecture built around Oracle RAC, EXT3 cannot be used for the database because of its lack of cluster support. Instead Oracle offers two choices of their own: OCFS2 and ASM. OCFS2 is designed as a clustered filesystem that allows data files to be accessed simultaneously by multiple servers. As an alternative to OCFS2 there is ASM, which uses what amounts to raw disk partitions to house the data blocks, and a specially designed Oracle instance to manage them. ASM has the advantage over OCFS2 in that it is supported on many platforms beyond Linux, and also because it offers advanced features like optimizing data block placement and data protection.
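
To make the ASM half of that comparison a little more concrete, here is a minimal sketch; the device names, disk group name and sizes are assumptions only, and the first statement runs in the ASM instance while the second runs in the database instance.

-- In the ASM instance: build a disk group from two raw partitions (illustrative devices)
CREATE DISKGROUP data NORMAL REDUNDANCY
  DISK '/dev/raw/raw1',
       '/dev/raw/raw2';

-- In the database instance: placement is then just a matter of naming the disk group
CREATE TABLESPACE app_data DATAFILE '+DATA' SIZE 10G;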

Another aspect of filesystems on Linux, and a partial explanation as to why there are choices to be made, is I/O performance. This article in the Red Hat Magazine is a good source of information on the topic, and provides some tips for performance improvement. In the Oracle environment this appears as topics such as asynchronous I/O, and I highly recommend researching these issues on MetaLink to see how to get the best performance from a Linux database server.

Flexibility
Linux is already equipped with many tools that make it ideal for services like web servers, application servers, and file servers. In contrast, these tools have to be added to traditional UNIX systems, which can be a difficult process for even a veteran sysadmin. In fact many open source tools are developed directly on Linux and then ported to other versions of UNIX.

Additionally, the potential for lower hardware costs makes it possible to implement servers that are dedicated to particular functions, such as administrative tools, which typically would be cost-prohibitive in a UNIX environment.

Scalability
Hardware design limits, and advances in technology like distributed programming models, are causing vendors to write applications that can scale outwards onto multiple servers instead of upwards onto a larger one. The days of a large mega-server that sits at the heart of an enterprise application are gone, replaced with a collection of servers each running some component of the application.

This has the side effect of requiring solid management tools to perform tasks like monitoring and maintaining those distributed servers. Both Oracle and Red Hat offer some assistance on the management side by providing tools which can help with patch management.

Evolution
The final major difference between the newcomer Linux and the entrenched UNIX products is one of simple evolution. Because of its open design and the number of people contributing to it, Linux is evolving at a pace that no traditional vendor can really match. This double-edged sword helps by bringing new features into the OS at a faster pace, but it sometimes cuts the other way by forcing upgrades. The current transition from 32-bit to 64-bit systems is a prime example. In this case the hardware and operating system components were very simple, but the unknowns, and thus the pain, came from running vendor applications on 64-bit platforms, where the application was only partially supported or needed various bug fixes to make it work.

Conclusion
Linux requires a balance between complexity and support costs, since each additional server that it brings to the architecture has purchase, maintenance, and administration costs. It might also push you into decisions that you might otherwise avoid, like learning the new ASM components. On the other hand, its flexibility might also make it a better choice for a web tier or tools server, making it a logical part of a heterogeneous environment.

Thursday, July 5, 2007

Verifying a Virtual X-Server (Xvfb) Setup

With E-Business Suite Release 11i and Release 12, an X-Server display is required for correct configuration. The application framework uses this for generating dynamic images, graphs, etc. It is also needed by reports produced in bit-map format. Note that for functionality using Java technology, the “headless” support feature can be implemented (requires J2SE 1.4.2 or higher). However, reports that print graphical output still need an X-Server display.

The Solution Beacon recommendation is to install and configure Xvfb, the virtual frame buffer for X Windows. A dedicated VNC server is another solution that can be used, but it has a little more overhead and additional security concerns. See Oracle MetaLink Note 181244.1 “Configuring VNC Or XVFB As The X Server For Applications 11i” for more information and setup guidelines. Also, a window manager such as Motif (MWM) or TWM running on the virtual display is necessary for some reports to work correctly.

E-Business Suite Diagnostics will confirm correct X-Server setup, but other than running a report that creates the expected bitmap output or displaying a page that creates a dynamic chart, there isn’t an easy way to visually check that the virtual X display is working… enter the xwd and xwud utilities! The xwd utility allows you to “dump” a screen (even a virtual screen!) and xwud allows you to “undump” the contents to another X window. So, how can these tools be used to verify an Xvfb setup? Read on to find out!

Procedure
Using xwd to copy a virtual screen and then using xwud to send the screen to an X display, we can visually determine that Xvfb is set up and working correctly. To do this we need 1) a system running Xvfb, and 2) another system running an X Windows server. The process is to capture the image on the virtual screen and display it on a client screen of the system running the X Windows server. If the image displays correctly we know that the virtual X server is working correctly. The X Windows server can be any garden-variety X system, either a Windows PC running Exceed, or an X display on a server running VNC.

The procedure is the following (commands are specific to Linux, other platforms may have different paths or have slightly different syntax for starting Xvfb):

1. On the server that runs the virtual X server, start Xvfb:
/usr/X11R6/bin/Xvfb :1 -ac -screen 0 1024x768x8 &
2. On the server running Xvfb, run the xclock client program:
xclock -display :1 &
3. On the system running the real X server, make sure X is running and clients can connect:
xhost +
4. On the server running Xvfb, get the window id for the xclock client
xwininfo -root -tree | grep xclock
5. On the server running Xvfb, use xwd to capture the window and xwud to send it to the X server
xwd -id 0x12345678 | xwud -display other_system:0.0

Conclusion
If all is working correctly, the xclock virtual window on the system running Xvfb should display on the screen of the system running the X server. If it does not, then the xclock client running on the Xvfb system did not create the clock image correctly, and the image cannot be displayed on the other system.

Monday, June 25, 2007

Recompiling executables results in "undefined reference to `__pure_virtual' "

In a recent upgrade from 11.5.9 to 11.5.10.2, on Red Hat Enterprise Linux 4, I ran into a problem where certain executables were not compiling. As it turned out, the executables (ENCACN, WICDOL, WICMEX, WICMLX) were all compiled with g++.

This was our 3rd iteration of the upgrade and this behavior had not been observed in prior upgrades. The difference between iteration #2 and iteration #3 was that the operating system level had been upgraded to Update 5. This resulted in me starting from a clean slate and going through all the prerequisites to make sure something was not missed.

The most obvious place to me was to look at the environment variable LD_ASSUME_KERNEL. A quick check on the command line indicated that it was already set and this was not my problem. On a side note, this variable is set by the script $AD_TOP/bin/adgetlnxver.sh which is called by $APPL_TOP/$CONTEXT_NAME.env which in turn is then called by $APPL_TOP/APPS$CONTEXT_NAME.env.

oradev@app-dev01> echo $LD_ASSUME_KERNEL
2.4.19

Next, I checked the versions of gcc and g++ to make sure those executables were pointing at the correct versions.

Running gcc -v and g++ -v should yield the following result:
gcc version 3.2.3 20030502 (Red Hat Linux 3.2.3-47.3)

The obvious prerequisite RPMs were there:
compat-db-4.1.25-9
compat-gcc-32-3.2.3-47.3
compat-gcc-32-c++-3.2.3-47.3
compat-libgcc-296-2.96-132.7.2
compat-libstdc++-296-2.96-132.7.2
compat-libstdc++-33-3.2.3-47.3
xorg-x11-deprecated-libs-devel-6.8.1-23.EL
xorg-x11-deprecated-libs-6.8.1-23.EL
openmotif-2.1.30-x

We also had the two following RPMs installed, delivered via Oracle Patch 4198954 (COMPATIBILITY PACKAGES FOR ORACLE ON RHEL 4):
compat-oracle-rhel4-1.0-5
compat-libcwait-2.0-2

Unfortunately this particular situation was not publicly documented on MetaLink. There were other hits on __pure_virtual and ENCACN, but none of them were applicable. The solution was to uninstall patch 4198954 and then reinstall it. This is supposed to be documented in MetaLink Doc ID: 435078.1 "Relink errors with ENCACN on Red Hat 4.0", but at the time of this post was not an externally viewable document. This resulted in performing the following steps below as the user root.

Remove the following packages:
rpm -ev compat-oracle-rhel4
rpm -ev compat-libcwait

Reinstall the following packages:
rpm -ivh compat-libcwait-2.1-1.i386.rpm
rpm -ivh compat-oracle-rhel4-1.0-5.i386.rpm

Once the packages were reinstalled, we were then able to successfully compile ENCACN using the following command:
adrelink.sh force=y "eng ENCACN"

I then went into adadmin and recompiled all the executables to ensure nothing else broke as a result of this. The typical caveats apply: do this in a test environment first and shut down the application before recompiling the executables.

Brian Bent | Solution Architect | Solution Beacon

Wednesday, June 20, 2007

Database Growth and Solutions Part III

Well, this is the final post in our series on Database Growth and Solutions for that growth. Today we focus on Database Archiving and Hierarchical Storage Management (HSM).

Database Archiving

Database archiving moves seldom-accessed data off to various storage options while retaining the ability to access that data easily, ensuring referential integrity, and making it easy to remove the data once its retention requirements have been met.

First you need to determine what data you wish to archive and what your data retention and availability requirements are for that data. How long are you willing to wait to retrieve that historical data? Can you view that data through a different medium other than your current application? Are you at risk if you do not archive your data?

Among the options for archiving data are:
  • Backups/snapshots of the database, in which you keep the data available for as long as you need it. This unfortunately is just a single snapshot and does not address a rolling archive type strategy.
  • For certain data, you can export the data and keep it as a dmp file or extract the data into a CSV file (a minimal sketch follows this list).
  • You can build a data mart/data warehouse or another reporting database and relocate the data.
  • Third party tools that will extract the pertinent data, file/store it and then provide the ability to remove it from the source database. Many third party vendors provide these tools and they are constantly improving their products to be able to execute the archiving capability against most modules in the E-Business Suite.
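
As promised in the export bullet above, here is a minimal SQL*Plus sketch of spooling older data out to a CSV file before it is purged; the table, columns, cutoff date and spool location are all assumptions for illustration, not a recommendation for any specific module.

-- Sketch only: spool journal lines older than the cutoff to a CSV file
SET PAGESIZE 0 LINESIZE 500 FEEDBACK OFF TRIMSPOOL ON
SPOOL /archive/gl_je_lines_pre_2005.csv
SELECT je_header_id         || ',' ||
       je_line_num          || ',' ||
       code_combination_id  || ',' ||
       NVL(accounted_dr, 0) || ',' ||
       NVL(accounted_cr, 0)
  FROM gl_je_lines
 WHERE effective_date < TO_DATE('01-JAN-2005', 'DD-MON-YYYY');
SPOOL OFF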

Make certain that if you back up data you may not need for several years, you take into consideration the software and platform it currently resides on. You may want to keep a backup copy of the OS software and application/database binaries filed away safely as well. You may be forced to rebuild an environment to be able to retrieve that data, only to find out you can’t get that OS and software anymore.

In comparison to partitioning, this data, once it has been archived, can be fully removed from the source database to the media/format you have chosen (see Hierarchical Storage Management below). Depending on the complexity of your data, that is where the effort becomes a great deal more difficult if you attempt to build a custom-developed archiving solution. Again, it may be easier and more cost effective to use the third party vendors to implement your strategy.

Hierarchical Storage Management (HSM)

It’s time to realize that not all data is equal. Some data is business critical and needs to be accessed in milliseconds. But much of the data we accumulate is not so critical, nor does it require the same level of access. But ask yourself this hard question: how much of the data you store on expensive, highly redundant storage is rarely accessed and not business critical? That’s where HSM comes in.

Hierarchical Storage Management (HSM) views all data as being in some phase of its “lifecycle”. Like most of us, a piece of data is born, serves some purpose and slowly declines in value to the organization. That’s not a very happy thought for human beings, but for data we can be less emotional.

A typical data lifecycle would include points where it is transactional, referential, historical, auditable and, finally, disposable. Transactional data is business critical and highly relevant to operations. It requires high speed access and experiences high incidences of retrieval. On the other end of the lifecycle, auditable data requires lower access speeds and also low incidences of retrieval. Plus it may be read-only at this point in its life. So why store both in the same storage environment and hassle with the performance degradation?

Here are some points to get you started:
  • Evaluate your data store, categorizing various types of data into one of the data lifecycle phases. How often is it accessed? How fast is it needed? What value does it have? Who owns it? How many users require it?
  • Consider where the data is on your storage platforms. Could it be stored more efficiently elsewhere? You might cringe at the thought, but there is probably some data that needs to be relegated to microfiche and much that can be archived to tape and stored.
  • Evaluate your Service Level Agreements for data management with your stakeholders. Help them see the value of HSM.
  • Evaluate legal requirements for data storage – does the data need to be easily accessible, or merely accessible in some format? Is there any data – historical mailnotes come to mind – that must be available in the case of a lawsuit, but legally only has to be available on paper for the purposes of discovery?
  • Consider the options for HSM tiers of data storage. Here are the most popular.
  1. Tape Backups
  2. Microfiche
  3. Secondary data storage (lower-cost and slower storage)
  4. Printed
  5. Optical Disk
  6. Delete it
  • Explore the HSM options from storage vendors.
  • Publish your HSM policy and ensure the buy-in of the data owners.

We recommend that you start where you can show big storage wins quickly.

Please respond back if you have other solutions and comments that we could add to a future blog.

Tuesday, June 19, 2007

Fun with Linux Filesystem Labels

Most things in life involve a little Give and Take, and filesystems in Linux are certainly no exception.

On the Give side of the equation Linux offers the ability to identify a filesystem not only by its traditional device file name, such as /dev/sde1, but also by a unique label that you can apply to the filesystem.


On the Take side, if you've ever supported a Linux system attached to SAN storage you know that a simple act such as adding or removing a LUN can cause Linux to remap your /dev/sd entries, causing what used to be /dev/sde1 to become /dev/sdd1.


The problem is readily apparent when you reboot the server. Your server is no longer able to locate the /dev/sde1 device in order to mount it at /oracle!

This problem can be solved by using filesystem labels, which are entries in the header structure of an EXT3 filesystem. By using the following command (logged in as root of course)
tune2fs -L oracle /dev/sde1


you can set the label for the filesystem "/dev/sde1" to the keyword "oracle".


While labels allow the use of some special characters like / and _ in their names, I recommend keeping the label simple and indicative of where the filesystem belongs.


To see the results of your change, use
tune2fs -l /dev/sde1   (note the lowercase "L")


Then to make this change effective alter the /etc/fstab entry for /oracle and replace the device file with a special label keyword as shown here:

LABEL=oracle /oracle ext3 noatime 1 1


This approach works for the native Linux EXT3 filesystem as well as for Oracle's OCFS and OCFS2 filesystems.

Friday, June 15, 2007

RDBMS CPU Patch 5901881 Gotcha

I recently ran across this issue while applying the RDBMS CPU patch (5901881) for version 10.2.0.2.

To start off with a little background information:
ORACLE_HOME - /opt/oracle/testdb/10.2.0
Operating System - HP-UX 11.11
Installation - cloned from another ORACLE_HOME

OPatch returned the following error to me while applying patch 5901881:
INFO:Running make for target libnmemso
INFO:Start invoking 'make' at Thu May 24 11:23:09 EDT 2007
INFO:Finish invoking 'make' at Thu May 24 11:23:09 EDT 2007
WARNING:OUI-67200:Make failed to invoke "/usr/ccs/bin/make -f ins_sysman.mk libnmemso ORACLE_HOME=/opt/oracle/testdb/10.2.0"....'ld: Can't find library: "java"

Well, as it turns out, since this ORACLE_HOME was cloned from another ORACLE_HOME, the file $ORACLE_HOME/sysman/lib/env_sysman.mk did not get properly updated with the new ORACLE_HOME information. You need to edit this file and update the variable JRE_LIB_DIR to point to your correct ORACLE_HOME.

Since this is a HP-UX environment, the information I am presenting is going to be specific to this platform.

Here is what the entry was prior to me correcting it and the new updated entry:
OLD - JRE_LIB_DIR=/opt/oracle/devdb/10.2.0/jdk/jre/lib/PA_RISC2.0
NEW - JRE_LIB_DIR=/opt/oracle/testdb/10.2.0/jdk/jre/lib/PA_RISC2.0

Once I made the correction, I was able to successfully run opatch apply again. The closest MetaLink note I found was Doc ID: 418557.1 "'/usr/bin/ld: cannot find -ljava' occurs while applying one off patch." For all practical purposes, you should not see this issue unless you are applying a patch that has to relink the libnmemso executable AND are patching a cloned ORACLE_HOME.

Brian Bent, Solution Architect, Solution Beacon

Wednesday, June 13, 2007

BLOGS, BLOGS, BLOGS

Sounds like the title of a bad 50’s sci-fi movie, doesn’t it? Blogs are cool, they are in, everyone is doing them – a defining attribute for our current society. At a macro level, blogs have made the world smaller and with a global audience, the stage is a study in contrasts, from teens baring their souls in online diaries to presidential candidates wooing prospective voters.


Given this societal propensity for blogging, Solution Beacon has stepped on to that global stage with a goal to serve the Oracle community through thought-provoking topics emanating from our vast base of experience. We want to spark conversation, address issues, ask questions and provide a forum where straight talk can be embraced, considered and assimilated into a global Oracle mindshare, if you will.


We will post things that we come across: snippets from white papers or presentations we have prepared, issues that we have encountered, and areas that people are interested in but where not a lot of documentation exists. In turn, we want you to send us the topics that are “in your face” right now – those nagging issues that remain on the corner of your desk, feedback on our Newsletter or this blog, questions on something you’ve read or heard. We’ll take them, raise them to the community and explore the responses that are sure to be very interesting and insightful.


So, take a few minutes and send us your topics – this forum is for you!!

Tuesday, June 12, 2007

Database Growth and Solutions Part II

This is the second of our three sessions discussing database growth and possible solutions to deal with such growth. Just an FYI, this discussion stems from a presentation by John Stouffer, Solution Beacon, and Rich Butterfield of HP. The Database Growth: Problems and Solutions presentation can be found on the Solution Beacon web site at the following link: http://www.solutionbeacon.com/ind_pres.htm


Add Capacity through Hardware Upgrades

This isn’t really a solution; it reflects that you cannot remove any more data and have typically exhausted all means of controlling or minimizing your data growth. So now what?


This is when you need to manage with what you have and plan for the future to control where you go; either that, or at least provide enough time to plan for a new career elsewhere.


  • Perform some type of capacity planning effort to see how fast you are growing and how quickly you will outgrow or outperform what you currently have
  • Start tracking data growth (a query sketch follows below)
  • Start planning on scaling up with more/faster CPUs and additional RAM, or scaling out with RAC solutions. You can plan years ahead and stretch the cost to ensure you can keep performance and availability in check
  • Perform as many of the data deletion/purging activities as possible


This doesn’t solve the problem, it just delays the inevitable from occurring.
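
For the growth-tracking bullet above, a simple starting point is to snapshot allocated and used space on a schedule; this is only a sketch, the history table name is an assumption, and you would normally wrap the insert in a nightly job.

-- Illustrative only: a small history table plus an insert to schedule nightly
CREATE TABLE xx_db_growth_history (
  snapshot_date DATE,
  gb_allocated  NUMBER,
  gb_used       NUMBER
);

INSERT INTO xx_db_growth_history (snapshot_date, gb_allocated, gb_used)
SELECT TRUNC(SYSDATE),
       (SELECT ROUND(SUM(bytes) / POWER(1024, 3), 1) FROM dba_data_files),
       (SELECT ROUND(SUM(bytes) / POWER(1024, 3), 1) FROM dba_segments)
  FROM dual;
COMMIT;

Trending those two numbers over a few months gives you a defensible growth rate to feed the capacity planning exercise.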


Decentralize or Do Not Consolidate Data

Think twice before consolidating your databases into one large database. It is sometimes much easier to manage a few small to medium size instances than it is to manage one large, growing one. If one database is more or less static, stable and less likely to grow, and you then consolidate it with another instance that is growing rapidly and more likely to encounter performance and data issues, you have now caused that stable instance to be in a state of disarray too. You now have twice as many unhappy users. Certainly choosing not to centralize your data may not help control IT resources, costs and overall manageability, but it may be the best solution for your enterprise.


Database Partitioning


Partitioning allows the segregation of data (tables/indexes) into smaller segments, while maintaining a seamless view of all the data as a whole.
Figuring out the best partitioning approach for your tables and indexes can take a considerable amount of analysis, but if implemented correctly, can potentially reap extensive performance gains and potential storage savings.
Older, lightly used partitions can be ported to cheaper, lower end storage solutions. Additionally, depending on the configuration of your partitions, when cloning to other instances, the partitions can be removed, thus reducing the storage needs on target servers.
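
To make the idea concrete, here is a minimal sketch of a range-partitioned custom history table; the table name, columns, boundaries and tablespace are assumptions only, and partitioning of seeded Applications tables should follow Oracle's documented approaches rather than this example.

-- Sketch only: a custom history table range-partitioned by transaction date
CREATE TABLE xx_order_history (
  order_id   NUMBER,
  order_date DATE,
  amount     NUMBER
)
PARTITION BY RANGE (order_date) (
  PARTITION p_fy2005 VALUES LESS THAN (TO_DATE('01-JAN-2006', 'DD-MON-YYYY')),
  PARTITION p_fy2006 VALUES LESS THAN (TO_DATE('01-JAN-2007', 'DD-MON-YYYY')),
  PARTITION p_max    VALUES LESS THAN (MAXVALUE)
);

-- Older partitions can later be moved to cheaper storage (or dropped)
ALTER TABLE xx_order_history MOVE PARTITION p_fy2005 TABLESPACE cheap_data;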


Partitioning is standard out of the box for only a select set of Oracle Applications modules, and custom partitioning requires licensing the database partitioning option from Oracle.
Partitioning still doesn’t address the data growth issue, and it takes a considerable amount of ongoing maintenance and support to keep the partitions and their performance in shape.


So, stay tuned for next week’s final posting on Database Growth and Solutions, where we will discuss Database Archiving and Hierarchical Storage Management (HSM).

Tuesday, June 5, 2007

Effects of Database Growth and Real Solutions to deal with that Growth

With today’s never-ending need to retain every little bit of data, be it to meet regulatory requirements, end-user needs (or desires), business intelligence and trend analysis, overall business growth, or just the consolidation of data from various systems, we are trying to find solutions to address and compensate for this never-ending data growth.

We see the effects of keeping all this data on applications and databases, typically through failure to meet service levels as a result of performance degradation, reduced availability, lengthened backups, recoveries, clones and upgrades, as well as the overall TCO just to maintain, support and plan for the constant data growth.

Several questions to be addressed include:

  • Are you keeping the right data for legal and compliance requirements?
  • Are the users getting the data and information they need or losing productivity trying to sort through it all?
  • Is it taking an unacceptable amount of time to retrieve the data or execute processes against the current volume of data?
  • Is the IT staff overwhelmed just trying to keep up with all the requirements and overhead to maintain it all? Aren’t they already working 16-24 hour days?

So, what do we do about it? Get rid of the users? Delete all the data? Rally the government and try and change the legal requirements? Spend a lot of money and effort and just keep growing?

Not likely much of that can occur, so I guess we need to look at real solutions.

Here are some viable solutions that could be considered:

  • Data Deletion/Purging
  • Add Capacity through HW Upgrades
  • Decentralize or Do Not Consolidate Data
  • Database Partitioning
  • Database Archiving
Over the next few weeks we will follow up with discussion of these solutions and what is really feasible for you, starting today with the things you can do in your current infrastructure with virtually no financial investment.

Data Deletion/Purging
I did say we would have a tough time removing all of the data, but that’s not to say we can’t remove some of it: irrelevant data, redundant data or historical data that is no longer required. Apart from your database, take a look at your OS and see where you have a tendency to waste space. This may not be a substantial amount in one day, but over a month, or cloned to several other environments without cleanup, miscellaneous data tends to accumulate in an enterprise-wide environment.

Suggestions:
  • Keep only what you need from the concurrent request output and logs. Do you really need 60 days online, or can you live with 30 or less?
  • Put the concurrent request output/logs onto a cheap storage solution/mount, especially for Production instances, where you are typically running faster and more expensive high-availability storage. Look for other areas where you can move files to cheaper storage. In some cases, even your archive log destination can be on cheaper storage. Be sure you have some redundancy in place to ensure archive logs are not lost. Can you back up the concurrent request data to other tables so as not to impact daily OLTP performance?
  • Remove or move those patch files and logs. Go look at your patch repository and admin/log directory and see how much data you have. You’ll be surprised at what you find, especially if you just did a major upgrade. Back up the files and then get rid of them. Ensure you get a backup of all of those patches you applied. Once the patches are superseded it may be difficult to get them again.
  • Remove the need for redundant online backup storage. Depending on your internal infrastructure and backup procedures and requirements, you may be able to run your backups directly to tape if you are currently running backups to disk and then off to tape. If not, then consider consolidating your backup storage to a single SAN/NAS solution so all servers can share the storage instead of each server having its own storage. If considering this option, please make sure this fits into your individual needs and can ensure you meet your backup and recovery requirements.
  • Keep an eye on your support logs generated by alerts, traces, apache, jserv, and forms processes. If they are not cleaning themselves out and/or just making copies of themselves after reboots, you may want to get rid of the old ones.
  • Keep what you clone to a minimum and take over only what you need and, especially, not all of the logs. Reduce your temp and undo space. You, typically, don’t need as large a foot print in your cloned environment as you do in a Production instance. If you have the luxury, subset or purge the data as part of your clone. If you are partitioning, only take the partitions you need. There are many tools out there that can provide you the ability to subset (take only a portion of the data) from your Production instance, as opposed to all the historical data. HP Rim has a great subset product that can remove as much historical data as you want.
  • Keep an eye on temporary and interface tables to make sure they are being cleared out. You may want to develop an alert/trigger to monitor these for you (a query sketch follows this list).
  • Monitor your workflow tables and schedule the “Purge Obsolete Workflow Runtime Data” concurrent program (also see the sketch after this list). Note: if you have not kept up on purging your WF tables, you may want to rebuild the tables/indexes after running the purge, as performance may initially be considerably worse. Also take a look at MetaLink Doc ID 277124.1 for good workflow purge items.
  • Take a look at your applications and review what concurrent purge jobs you may want to consider. Many modules have their own purge programs. Try and be proactive and prevent the data growth from happening.
Remember, once the data is deleted, short of a recovery, the data is gone and cannot be retrieved. So make sure you know what you are deleting and that the auditors and users are OK with it.
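
As referenced in the interface-table and workflow bullets above, here are two monitoring queries you could adapt; the LIKE pattern is only an assumption about naming conventions, and neither query changes any data.

-- Largest segments whose names suggest interface/temporary tables
SELECT owner, segment_name, ROUND(bytes / 1024 / 1024) AS size_mb
  FROM dba_segments
 WHERE segment_name LIKE '%INTERFACE%'
 ORDER BY bytes DESC;

-- Completed workflow items that are candidates for the purge program
SELECT item_type, COUNT(*) AS closed_items
  FROM applsys.wf_items
 WHERE end_date IS NOT NULL
 GROUP BY item_type
 ORDER BY COUNT(*) DESC;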

Wednesday, May 23, 2007

Resetting a lost oc4jadmin password

A common question is: how do I change the oc4jadmin password when I don't already know it? If you have access to the application owner's UNIX account it's quite easy. Note that it does require a restart of the 10g application server; since you've lost the password I'm hoping that you're on a development system, so restarting the application server shouldn't be a big deal for you.

  1. Start by locating the correct Oracle home, let's assume that it's .../10gas/10.1.3 for our example
  2. Do a quick check to see that it has the "home" OC4J instance running in it as shown here
    oracle@myhost > /oracle/10gas/10.1.3/opmn/bin/opmnctl status

    Processes in Instance: demo.myhost
    ---------------------------------+--------------------+---------+---------
    ias-component                    | process-type       |     pid | status
    ---------------------------------+--------------------+---------+---------
    OC4JGroup:default_group          | OC4J:bixmlpserver  |   17003 | Alive
    OC4JGroup:default_group          | OC4J:bianalytics   |   17002 | Alive
    OC4JGroup:default_group          | OC4J:bijmx         |   17001 | Alive
    OC4JGroup:default_group          | OC4J:home          |   17000 | Alive
    ASG                              | ASG                |     N/A | Down
    HTTP_Server                      | HTTP_Server        |   16999 | Alive

  3. Now edit the .../10gas/10.1.3/j2ee/home/config/system-jazn-data.xml
  4. Search for the keyword "oc4jadmin" to locate the following stanza:
    <user>
      <name>oc4jadmin</name>
      <display-name>OC4J Administrator</display-name>
      <guid>4FE81440BD2911DBBF8EED2D8B2D4B8C</guid>
      <description>OC4J Administrator</description>
      <credentials>{903}YD4N1akwPa8FxfnwvqTAT76FCx62bGsfU8Kzd2p+IJQ=</credentials>
    </user>

  5. Change the content of the credentials tag to something like this:
    <credentials>!mynewpassword</credentials>
    Note the exclamation mark at the beginning of the new password. It marks the entry as a non-encrypted password value.
  6. Save the file

  7. Use .../10gas/10.1.3/opmn/bin/opmnctl to restart the OC4J apps server

  8. Login to the applications server as the oc4jadmin user to verify that your new password works
  9. If you check the system-jazn-data.xml file after the restart you should see that your edit has been replaced by an encrypted version of the password

Friday, March 23, 2007

Evaluating Linux

For the last few years it's been difficult to open any trade journal or technology website without seeing articles on Linux. A trip past the local bookstore's technical racks only amplifies this fact. Indeed there are enough articles, magazines, and books that it's easy to feel that you're being left behind! And in fact, if your organization hasn't at least looked into implementing Linux, then you are.


Since its inception, Linux has grown from a hobbyist implementation of UNIX to an enterprise platform ready for the most demanding applications. This growth has been neither quick nor painless, but it has resulted in an operating system that is undeniably ready for use. The question, however, is this: is Linux ready for your use?


This series of posts will address many of the things that need to be understood before deciding to add Linux to your Oracle implementation. Because the Oracle E-Business Suite is a mission-critical component of your business, everything that supports it becomes mission critical, making a major technology change something that needs careful analysis. Among the topics this series will cover are: architectural differences, scalability, integration, support, and cost factors.

Wednesday, February 21, 2007

Welcome!

Welcome to the new Solution Beacon blog site! This blog will carry news and technical articles from Solution Beacon that are of interest to the Oracle user community and our client base. In the ever changing world of the Oracle E-Business Suite we hope that this site will become a tool that fosters faster and more interactive communications.


P.S.: As this site becomes active over the coming weeks don't be surprised if things seem to move around, it will take us a little while to become proficient at using this tool.