Monday, June 25, 2007

Recompiling executables results in "undefined reference to `__pure_virtual' "

In a recent upgrade from 11.5.9 to 11.5.10.2 on Red Hat Enterprise Linux 4, I ran into a problem where certain executables would not compile. As it turned out, the affected executables (ENCACN, WICDOL, WICMEX, WICMLX) were all compiled with g++.

This was our 3rd iteration of the upgrade, and this behavior had not been observed in the prior iterations. The difference between iteration #2 and iteration #3 was that the operating system had been upgraded to Update 5. That prompted me to start from a clean slate and go back through all the prerequisites to make sure nothing had been missed.

The most obvious place to look was the environment variable LD_ASSUME_KERNEL. A quick check on the command line confirmed that it was already set, so this was not my problem. On a side note, this variable is set by the script $AD_TOP/bin/adgetlnxver.sh, which is called by $APPL_TOP/$CONTEXT_NAME.env, which in turn is called by $APPL_TOP/APPS$CONTEXT_NAME.env.

oradev@app-dev01> echo $LD_ASSUME_KERNEL
2.4.19

Next, I checked the versions of gcc and g++ to make sure those executables were pointing at the correct versions.

Running gcc -v and g++ -v should yield the following result:
gcc version 3.2.3 20030502 (Red Hat Linux 3.2.3-47.3)
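
For example, on the system above both compilers should report the same version line (the prompt mirrors the earlier example, and piping through grep just trims the verbose output; gcc prints its version information to stderr, hence the 2>&1):

oradev@app-dev01> gcc -v 2>&1 | grep 'gcc version'
gcc version 3.2.3 20030502 (Red Hat Linux 3.2.3-47.3)
oradev@app-dev01> g++ -v 2>&1 | grep 'gcc version'
gcc version 3.2.3 20030502 (Red Hat Linux 3.2.3-47.3)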

The obvious prerequisite RPMs were there:
compat-db-4.1.25-9
compat-gcc-32-3.2.3-47.3
compat-gcc-32-c++-3.2.3-47.3
compat-libgcc-296-2.96-132.7.2
compat-libstdc++-296-2.96-132.7.2
compat-libstdc++-33-3.2.3-47.3
xorg-x11-deprecated-libs-devel-6.8.1-23.EL
xorg-x11-deprecated-libs-6.8.1-23.EL
openmotif-2.1.30-x
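
A quick way to confirm them all in one pass is to query the RPM database directly; for example (package names taken from the list above, so adjust to whatever your baseline requires):

rpm -q compat-db compat-gcc-32 compat-gcc-32-c++ compat-libgcc-296 \
    compat-libstdc++-296 compat-libstdc++-33 \
    xorg-x11-deprecated-libs xorg-x11-deprecated-libs-devel openmotif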

We also had the following two RPMs installed, delivered via Oracle patch 4198954 (COMPATIBILITY PACKAGES FOR ORACLE ON RHEL 4):
compat-oracle-rhel4-1.0-5
compat-libcwait-2.0-2

Unfortunately, this particular situation was not publicly documented on MetaLink. There were other hits on __pure_virtual and ENCACN, but none of them were applicable. The solution was to uninstall patch 4198954 and then reinstall it. This is supposed to be documented in MetaLink Doc ID 435078.1, "Relink errors with ENCACN on Red Hat 4.0", but at the time of this post it was not an externally viewable document. The fix came down to performing the following steps as the root user.

Remove the following packages:
rpm -ev compat-oracle-rhel4
rpm -ev compat-libcwait

Reinstall the following packages:
rpm -ivh compat-libcwait-2.1-1.i386.rpm
rpm -ivh compat-oracle-rhel4-1.0-5.i386.rpm

Once the packages were reinstalled, we were then able to successfully compile ENCACN using the following command:
adrelink.sh force=y "eng ENCACN"

I then went into adadmin and recompiled all the executables to ensure nothing else broke as a result of this. The typical caveats apply: do this in a test environment first, and shut down the application before recompiling the executables.

Brian Bent | Solution Architect | Solution Beacon

Wednesday, June 20, 2007

Database Growth and Solutions Part III

Well, this is the final post in our series on Database Growth and the solutions for that growth. Today we focus on Database Archiving and Hierarchical Storage Management (HSM).

Database Archiving

Database archiving provides the ability to move seldom-accessed data off to various storage options while retaining easy access to that data, maintaining referential integrity, and making it easy to remove the data once its retention requirements have been met.

First you need to determine what data you wish to archive and what your retention and availability requirements are for that data. How long are you willing to wait to retrieve that historical data? Can you view that data through a medium other than your current application? Are you at risk if you do not archive your data?

Among the options for archiving data are:
  • Backups/snapshots of the database, in which you keep the data available for as long as you need it. Unfortunately, this is just a single snapshot and does not address a rolling archive strategy.
  • For certain data, you can export it and keep it as a dmp file, or extract it into a CSV file (see the sketch after this list).
  • You can build a data mart/data warehouse or another reporting database and relocate the data.
  • Third-party tools that will extract the pertinent data, file/store it, and then provide the ability to remove it from the source database. Many vendors provide these tools and are constantly improving them to execute archiving against most modules in the E-Business Suite.
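
As a rough sketch of the export route, a Data Pump parameter file keeps the quoting manageable. Everything here is illustrative: the table, the predicate, and the ARCHIVE_DIR directory object are placeholders to replace with your own, and expdp assumes a 10g database:

cat > archive_gl.par <<EOF
tables=GL.GL_BALANCES
query="WHERE period_name LIKE '%-02'"
directory=ARCHIVE_DIR
dumpfile=gl_balances_fy2002.dmp
logfile=gl_balances_fy2002.log
EOF
expdp system parfile=archive_gl.par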

If you back up data that you may not need for several years, make certain you take into consideration the software and platform it currently resides on. You may want to keep a backup copy of the OS software and application/database binaries filed away safely as well. You could be forced to rebuild an environment to retrieve that data, only to find out you can't get that OS and software anymore.

In comparison to partitioning, this data, once archived, can be fully removed from the source database to the media/format you have chosen (see Hierarchical Storage Management below). Depending on the complexity of your data, this is where a custom-developed archiving solution becomes a great deal more difficult to implement. Again, it may be easier and more cost effective to use a third-party vendor to implement your strategy.

Hierarchical Storage Management (HSM)

It’s time to realize that not all data is equal. Some data is business critical and needs to be accessed in milliseconds, but much of the data we accumulate is not so critical, nor does it require the same level of access. Ask yourself this hard question: how much of the data you keep on expensive, highly redundant storage is rarely accessed and not business critical? That’s where HSM comes in.

Hierarchical Storage Management (HSM) views all data as being in some phase of its “lifecycle”. Like most of us, a piece of data is born, serves some purpose, and slowly declines in value to the organization. That’s not a very happy thought for human beings, but for data we can be less emotional.

A typical data lifecycle would include points where it is transactional, referential, historical, auditable and, finally, disposable. Transactional data is business critical and highly relevant to operations. It requires high speed access and experiences high incidences of retrieval. On the other end of the lifecycle, auditable data requires lower access speeds and also low incidences of retrieval. Plus it may be read-only at this point in its life. So why store both in the same storage environment and hassle with the performance degradation?

Here are some points to get you started:
  • Evaluate your data store, categorizing various types of data into one of the data lifecycle phases. How often is it accessed? How fast is it needed? What value does it have? Who owns it? How many users require it?
  • Consider where the data is on your storage platforms. Could it be stored more efficiently elsewhere? You may cringe at the thought, but there is probably some data that should be relegated to microfiche and much more that can be archived to tape and stored.
  • Evaluate your Service Level Agreements for data management with your stakeholders. Help them see the value of HSM.
  • Evaluate legal requirements for data storage – does the data need to be easily accessible, or merely accessible in some format? Is there any data – historical mailnotes come to mind – that must be available in the case of a lawsuit, but legally only has to be available on paper for the purposes of discovery?
  • Consider the options for HSM tiers of data storage. Here are the most popular.
  1. Tape Backups
  2. Microfiche
  3. Secondary data storage (lower-cost and slower storage)
  4. Printed
  5. Optical Disk
  6. Delete it
  • Explore the HSM options from storage vendors.
  • Publish your HSM policy and ensure the buy-in of the data owners.

We recommend that you start where you can show big storage wins quickly.

Please respond back if you have other solutions and comments that we could add to a future blog.

Tuesday, June 19, 2007

Fun with Linux Filesystem Labels

Most things in life involve a little Give and Take, and filesystems in Linux are certainly no exception.

On the Give side of the equation Linux offers the ability to identify a filesystem not only by its traditional device file name, such as /dev/sde1, but also by a unique label that you can apply to the filesystem.


On the Take side, if you've ever supported a Linux system attached to SAN storage you know that a simple act such as adding or removing a LUN can cause Linux to remap your /dev/sd entries, causing what used to be /dev/sde1 to become /dev/sdd1.


The problem is readily apparent when you reboot the server. Your server is no longer able to locate the /dev/sde1 device in order to mount it at /oracle!

This problem can be solved by using filesystem labels, which are entries in the header structure of an EXT3 filesystem. By using the following command (logged in as root of course)
tune2fs -L oracle /dev/sde1


you can set the label for the filesystem "/dev/sde1" to the keyword "oracle".


While labels allow the use of some special characters like / and _ in their names, I recommend keeping the label simple and indicative of where the filesystem belongs.


To see the results of your change, use
tune2fs -l /dev/sde1   (note the lowercase "L")


Then, to make this change effective, alter the /etc/fstab entry for /oracle and replace the device file with the LABEL= keyword as shown here:

LABEL=oracle /oracle ext3 noatime 1 1
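
To sanity check the change before the next reboot, you can remount the filesystem and confirm which device currently carries the label. The commands below are standard e2fsprogs/util-linux utilities on RHEL; substitute your own mount point and device:

umount /oracle
mount /oracle            # picks up the LABEL= entry from /etc/fstab
df -h /oracle
e2label /dev/sde1        # displays the label applied earlier
findfs LABEL=oracle      # displays whichever device currently carries the label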


This approach works for the native Linux EXT3 filesystem as well as for Oracle's OCFS and OCFS2 filesystems.

Friday, June 15, 2007

RDBMS CPU Patch 5901881 Gotcha

I recently ran across this issue while applying the RDBMS CPU patch (5901881) for version 10.2.0.2.

To start off with a little background information:
ORACLE_HOME - /opt/oracle/testdb/10.2.0
Operating System - HP-UX 11.11
Installation - cloned from another ORACLE_HOME

OPatch returned the following error to me while applying patch 5901881:
INFO:Running make for target libnmemso
INFO:Start invoking 'make' at Thu May 24 11:23:09 EDT 2007
INFO:Finish invoking 'make' at Thu May 24 11:23:09 EDT 2007
WARNING:OUI-67200:Make failed to invoke "/usr/ccs/bin/make -f ins_sysman.mk libnmemso ORACLE_HOME=/opt/oracle/testdb/10.2.0"....'ld: Can't find library: "java"

Well, as it turns out, because this ORACLE_HOME was cloned from another ORACLE_HOME, the file $ORACLE_HOME/sysman/lib/env_sysman.mk does not get properly updated with the new ORACLE_HOME information. You need to edit this file and update the JRE_LIB_DIR variable to point at the correct ORACLE_HOME.

Since this is an HP-UX environment, the information I am presenting is specific to this platform.

Here is what the entry was prior to the correction, along with the new, updated entry:
OLD - JRE_LIB_DIR=/opt/oracle/devdb/10.2.0/jdk/jre/lib/PA_RISC2.0
NEW - JRE_LIB_DIR=/opt/oracle/testdb/10.2.0/jdk/jre/lib/PA_RISC2.0
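
If you would rather script the edit than change the file by hand, something along these lines works on HP-UX (a sketch only, reusing the old and new paths from the example above; substitute your own ORACLE_HOME values and keep the backup copy):

cd $ORACLE_HOME/sysman/lib
grep JRE_LIB_DIR env_sysman.mk                  # confirm the stale path
cp env_sysman.mk env_sysman.mk.orig             # keep a backup
sed 's|/opt/oracle/devdb/10.2.0|/opt/oracle/testdb/10.2.0|' env_sysman.mk.orig > env_sysman.mk
grep JRE_LIB_DIR env_sysman.mk                  # verify the correction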

Once I made the correction, I was able to successfully run opatch apply again. The closest MetaLink note I found was Doc ID: 418557.1 "'/usr/bin/ld: cannot find -ljava' occurs while applying one off patch." For all practical purposes, you should not see this issue unless you are applying a patch that has to relink the libnmemso executable AND are patching a cloned ORACLE_HOME.

Brian Bent, Solution Architect, Solution Beacon

Wednesday, June 13, 2007

BLOGS, BLOGS, BLOGS

Sounds like the title of a bad 50’s sci-fi movie, doesn’t it? Blogs are cool, they are in, everyone is doing them – a defining attribute of our current society. At a macro level, blogs have made the world smaller, and with a global audience the stage is a study in contrasts, from teens baring their souls in online diaries to presidential candidates wooing prospective voters.


Given this societal propensity for blogging, Solution Beacon has stepped on to that global stage with a goal to serve the Oracle community through thought-provoking topics emanating from our vast base of experience. We want to spark conversation, address issues, ask questions and provide a forum where straight talk can be embraced, considered and assimilated into a global Oracle mindshare, if you will.


We will post things that we come across: snippets from white papers or presentations we have prepared, issues that we have encountered, and areas that people are interested in but for which not a lot of documentation exists. In turn, we want you to send us the topics that are “in your face” right now – those nagging issues that remain on the corner of your desk, feedback on our Newsletter or this blog, or questions on something you’ve read or heard. We’ll take them, raise them to the community, and explore the responses, which are sure to be very interesting and insightful.


So, take a few minutes and send us your topics – this forum is for you!!

Tuesday, June 12, 2007

Database Growth and Solutions Part II

This is the second of three parts discussing database growth and the possible solutions for dealing with that growth. Just an FYI, this discussion stems from a presentation by John Stouffer of Solution Beacon and Rich Butterfield of HP. The Database Growth: Problems and Solutions presentation can be found on the Solution Beacon web site at the following link: http://www.solutionbeacon.com/ind_pres.htm


Add Capacity through Hardware Upgrades

This isn’t really a solution; it reflects that you cannot remove any more data and have typically exhausted all means of controlling or minimizing your data growth. So now what?


This is when you need to manage with what you have and plan for the future to control where you go; either that, or at least provide enough time to plan for a new career elsewhere.


  • Perform some type of capacity planning effort to see how fast you are growing and how quickly you will outgrow what you currently have
  • Start tracking data growth (see the sketch after this list)
  • Start planning on scaling up, with more/faster CPUs and additional RAM, or scaling out, with RAC solutions. You can plan years ahead and spread the cost to ensure you keep performance and availability in check
  • Perform as many of the data deletion/purging activities as possible
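
For the tracking item above, a simple starting point is to snapshot segment sizes on a schedule and keep the history. A minimal sketch, assuming sqlplus is in the PATH and you run it on the database server as a DBA (the spool path is just an example):

sqlplus -s "/ as sysdba" <<EOF
set pagesize 100 linesize 120
column gb format 999,990.99
spool /tmp/tablespace_growth_`date +%Y%m%d`.lst
select tablespace_name, round(sum(bytes)/1024/1024/1024,2) gb
from   dba_segments
group  by tablespace_name
order  by gb desc;
spool off
exit
EOF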


This doesn’t solve the problem; it just delays the inevitable.


Decentralize or Do Not Consolidate Data

Think twice before consolidating your databases into one large database. It is sometimes much easier to manage a few small to medium sized instances than one large, growing one. If one database is more or less static, stable, and unlikely to grow, and you consolidate it with another instance that is growing rapidly and more likely to encounter performance and data issues, you have now put that stable instance into a state of disarray too. You now have twice as many unhappy users. Choosing not to centralize your data may not help control IT resources, costs, and overall manageability, but it may be the best solution for your enterprise.


Database Partitioning


Partitioning allows the segregation of data (tables/indexes) into smaller segments while maintaining a seamless view of all the data as a whole. Figuring out the best partitioning approach for your tables and indexes can take a considerable amount of analysis, but if implemented correctly it can reap extensive performance gains and potential storage savings. Older, lightly used partitions can be moved to cheaper, lower-end storage. Additionally, depending on the configuration of your partitions, when cloning to other instances the older partitions can be dropped, reducing the storage needs on target servers.


Partitioning is standard out of the box for only a select set of Oracle Applications modules, and custom partitioning requires licensing the database Partitioning option from Oracle. Partitioning still doesn’t address the underlying data growth, and it takes a considerable amount of ongoing maintenance and support to keep the partitions, and their performance, in shape.
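
For a flavor of what range partitioning looks like, here is a minimal sketch. The table, columns, and partition boundaries are made up for illustration (and, as noted, the Partitioning option must be licensed):

sqlplus -s apps <<'EOF'
-- illustrative only: a custom history table range-partitioned by fiscal year
create table xx_gl_balances_hist (
  period_year      number       not null,
  period_name      varchar2(15) not null,
  set_of_books_id  number,
  balance_amt      number
)
partition by range (period_year) (
  partition p_2004 values less than (2005),
  partition p_2005 values less than (2006),
  partition p_max  values less than (maxvalue)
);
exit
EOF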


So, stay tuned for next week's final posting on Database Growth and Solutions, where we will discuss Database Archiving and Hierarchical Storage Management (HSM).

Tuesday, June 5, 2007

Effects of Database Growth and Real Solutions to deal with that Growth

With today’s never-ending need to retain every little bit of data, be it to meet regulatory requirements, end-user needs (or desires), business intelligence and trend analysis, overall business growth, or just the consolidation of data from various systems, we are trying to find solutions to address and compensate for this never-ending data growth.

We see the effects of keeping all this data on applications and databases, typically as failures to meet service levels due to performance degradation, reduced availability, and lengthened times for backups, recovery, clones, and upgrades, as well as in the overall TCO just to maintain, support, and plan for the constant data growth.

Several questions to be addressed include:

  • Are you keeping the right data for legal and compliance requirements?
  • Are the users getting the data and information they need or losing productivity trying to sort through it all?
  • Is it taking an unacceptable amount of time to retrieve the data or execute processes against the current volume of data?
  • Is the IT staff overwhelmed just trying to keep up with all the requirements and overhead to maintain it all? Aren’t they already working 16-24 hour days?

So, what do we do about it? Get rid of the users? Delete all the data? Rally the government and try and change the legal requirements? Spend a lot of money and effort and just keep growing?

Not likely much of that can occur, so I guess we need to look at real solutions.

Here are some viable solutions that could be considered:

  • Data Deletion/Purging
  • Add Capacity through HW Upgrades
  • Decentralize or Do Not Consolidate Data
  • Database Partitioning
  • Database Archiving
Over the next few weeks we will follow up with a discussion of these solutions and what is really feasible for you, starting today with the things you can do in your current infrastructure with virtually no financial investment.

Data Deletion/Purging
I did say we would have a tough time removing all of the data, but that’s not to say we can’t remove some of it: irrelevant data, redundant data, or historical data that is no longer required. Apart from your database, take a look at your OS and see where you have a tendency to waste space. It may not be a substantial amount in one day, but over a month, or when cloned to several other environments without cleanup, miscellaneous data tends to accumulate in an enterprise-wide environment.

Suggestions:
  • Keep only what you need from the concurrent request output and logs. Do you really need 60 days online, or can you live with 30 or less? (See the sketch after this list.)
  • Put the concurrent request out/log files onto a cheap storage solution/mount, especially for Production instances, where you are typically running faster, more expensive, high-availability storage. Look for other areas where you can move files to cheaper storage. In some cases, even your archive log destination can be on cheaper storage; just be sure you have some redundancy in place to ensure archive logs are not lost. Can you back up the concurrent request data to other tables so as not to impact daily OLTP performance?
  • Remove or move those patch files and logs. Go look at your patch repository and admin/log directories and see how much data you have. You’ll be surprised at what you find, especially if you just did a major upgrade. Back up the files and then get rid of them, and be sure to keep a backup of all of the patches you applied; once patches are superseded, it may be difficult to get them again.
  • Remove the need for redundant online backup storage. Depending on your internal infrastructure, backup procedures, and requirements, you may be able to run your backups directly to tape if you are currently running backups to disk and then off to tape. If not, then consider consolidating your backup storage to a single SAN/NAS solution so all servers can share the storage instead of each server having its own. If considering this option, please make sure it fits your individual needs and that you can still meet your backup and recovery requirements.
  • Keep an eye on the support logs generated by alerts, traces, Apache, JServ, and forms processes. If they are not cleaning themselves out and/or are just making copies of themselves after reboots, you may want to get rid of the old ones.
  • Keep what you clone to a minimum and take over only what you need, especially not all of the logs. Reduce your temp and undo space; you typically don’t need as large a footprint in a cloned environment as you do in a Production instance. If you have the luxury, subset or purge the data as part of your clone. If you are partitioning, only take the partitions you need. There are many tools out there that can subset (take only a portion of) the data from your Production instance, as opposed to all the historical data. HP Rim has a great subset product that can remove as much historical data as you want.
  • Keep an eye on temporary and interface tables to make sure they are being cleared out. You may want to develop an alert/trigger to monitor these for you.
  • Monitor your workflow tables and schedule the “Purge Obsolete Workflow Runtime Data” program. Note that if you have not kept up on purging your WF tables, you may want to rebuild the tables/indexes after the purge runs, as performance may initially be considerably worse. Also take a look at MetaLink Doc ID 277124.1 for good workflow purge items.
  • Take a look at your applications and review which concurrent purge jobs you may want to consider. Many modules have their own purge programs. Try to be proactive and prevent the data growth from happening.
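
For the concurrent request item at the top of this list, the supported route is to schedule the standard “Purge Concurrent Request and/or Manager Data” program, since it cleans up both the files and the FND tables. At the OS level, a quick look at what is accumulating might be something like this (a sketch only; the l*.req/o*.out naming and the 30-day cutoff are assumptions to verify against your own setup before deleting anything):

find $APPLCSF/$APPLLOG -name 'l*.req' -mtime +30 | wc -l     # request log files older than 30 days
find $APPLCSF/$APPLOUT -name 'o*.out' -mtime +30 | wc -l     # request output files older than 30 days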
Remember, once the data is deleted, short of a recovery, it is gone and cannot be retrieved. So make sure you know what you are deleting and that the auditors and users are OK with it.