With today’s never-ending need to retain every bit of data, whether to meet regulatory requirements, satisfy end-user needs (or desires), support business intelligence and trend analysis, fuel overall business growth, or simply consolidate data from various systems, we are all trying to find solutions that address and compensate for this constant data growth.
We see the effects of keeping all this data on applications and databases: missed service levels due to performance degradation, reduced availability, longer backups, recoveries, clones, and upgrades, and a rising overall TCO just to maintain, support, and plan for the constant growth.
Several questions to be addressed include:
- Are you keeping the right data for legal and compliance requirements?
- Are the users getting the data and information they need or losing productivity trying to sort through it all?
- Is it taking an unacceptable amount of time to retrieve the data or execute processes against the current volume of data?
- Is the IT staff overwhelmed just trying to keep up with all the requirements and the overhead of maintaining it all? Aren’t they already working 16-to-24-hour days?
So, what do we do about it? Get rid of the users? Delete all the data? Rally the government and try to change the legal requirements? Spend a lot of money and effort and just keep growing?
Not much of that is likely to happen, so we need to look at real solutions.
Here are some viable solutions that could be considered:
- Data Deletion/Purging
- Add Capacity through HW Upgrades
- Decentralize or Do Not Consolidate Data
- Database Partitioning
- Database Archiving
Data Deletion/Purging
I did say we would have a tough time removing all of the data, but that’s not to say we can’t remove some of it: irrelevant data, redundant data, or historical data that is no longer required. And beyond the database, take a look at your OS and see where you tend to waste space. It may not be a substantial amount in one day, but over a month, or cloned to several other environments without cleanup, miscellaneous data accumulates quickly in an enterprise-wide environment.
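Before deciding what to purge, it helps to know where the space is actually going. Here is a minimal disk-usage report in Python; treat it as a sketch, with the starting path supplied on the command line (it defaults to the current directory):

```python
import os
import sys

def dir_size(path):
    """Total size in bytes of all files under path."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            try:
                total += os.path.getsize(os.path.join(root, name))
            except OSError:
                pass  # file vanished or unreadable; skip it
    return total

if __name__ == "__main__":
    base = sys.argv[1] if len(sys.argv) > 1 else "."
    # Report each immediate subdirectory, largest first.
    sizes = []
    for entry in sorted(os.listdir(base)):
        full = os.path.join(base, entry)
        if os.path.isdir(full):
            sizes.append((dir_size(full), entry))
    for size, entry in sorted(sizes, reverse=True):
        print(f"{size / 1024 / 1024:10.1f} MB  {entry}")
```

Run it against $APPL_TOP, your patch repository, or your backup staging area and the big offenders stand out quickly.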
- Keep only what you need from the concurrent request output and logs. Do you really need 60 days online, or can you live with 30 or fewer? (A cleanup sketch follows this list.)
- Put the concurrent request out/log files onto a cheap storage solution/mount, especially for Production instances, which typically run on faster, more expensive, high-availability storage. Look for other areas where you can move files to cheaper storage; in some cases, even your archive log destination can live on cheaper storage, as long as you have redundancy in place to ensure archive logs are not lost. Can you back up the concurrent request data to other tables so as not to impact daily OLTP performance?
- Remove or move those patch files and logs. Go look at your patch repository and admin/log directory and see how much data you have; you’ll be surprised at what you find, especially if you just did a major upgrade. Back up the files and then get rid of them, and make sure that backup includes all of the patches you applied: once a patch is superseded, it may be difficult to get again. (See the archive sketch after this list.)
- Remove the need for redundant online backup storage. Depending on your internal infrastructure, backup procedures, and requirements, you may be able to run your backups directly to tape if you currently back up to disk and then off to tape. If not, consider consolidating your backup storage onto a single SAN/NAS solution so all servers share the storage instead of each server having its own. If you consider this option, make sure it fits your individual needs and still meets your backup and recovery requirements.
- Keep an eye on the support logs generated by alerts, traces, Apache, JServ, and Forms processes. If they are not cleaning themselves out, or are just making copies of themselves after reboots, get rid of the old ones.
- Keep what you clone to a minimum: take over only what you need, and especially not all of the logs. Reduce your temp and undo space; you typically don’t need as large a footprint in a cloned environment as you do in Production. If you have the luxury, subset or purge the data as part of your clone, and if you are partitioning, take only the partitions you need. There are many tools out there that let you subset (take only a portion of the data from) your Production instance instead of carrying all the historical data; HP RIM has a great subset product that can remove as much historical data as you want.
- Keep an eye on temporary and interface tables to make sure they are being cleared out. You may want to develop an alert/trigger to monitor these for you (a monitoring sketch follows this list).
- Monitor your workflow tables and schedule the “Purge Obsolete Workflow Runtime Data” concurrent program. Note that if you have not kept up with purging your WF tables, performance may initially be considerably worse after you run the purge, so you may want to rebuild the tables/indexes afterwards. Also take a look at MetaLink Doc ID 277124.1 for good workflow purge items. (A query to size up purge candidates follows this list.)
- Take a look at your applications and review which concurrent purge jobs you may want to consider; many modules have their own purge programs. Try to be proactive and prevent the data growth from happening in the first place.
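To put some teeth behind the retention numbers above, here is a minimal cleanup sketch for concurrent request out/log files. It assumes a typical EBS layout where $APPLCSF points at the concurrent processing file area; the fallback path and the 30-day retention are examples, so adjust both to your environment:

```python
import os
import time

# Retention window for concurrent request out/log files (example value).
KEEP_DAYS = 30
cutoff = time.time() - KEEP_DAYS * 86400

# $APPLCSF is the standard EBS concurrent processing file area;
# the fallback path here is just an example.
applcsf = os.environ.get("APPLCSF", "/u01/app/applcsf")
for sub in ("out", "log"):
    target = os.path.join(applcsf, sub)
    if not os.path.isdir(target):
        continue
    for root, _dirs, files in os.walk(target):
        for name in files:
            path = os.path.join(root, name)
            try:
                if os.path.getmtime(path) < cutoff:
                    os.remove(path)  # or move to cheap storage instead
            except OSError:
                pass  # file in use or already gone; skip it
```

Keep in mind that removing the OS files does not touch the FND tables; the standard “Purge Concurrent Request and/or Manager Data” program cleans up both, so treat a script like this as a supplement for stray files, not a replacement.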
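For patch logs, the safer pattern is archive-then-remove. This sketch tars up admin log files older than 90 days before deleting them; LOG_DIR and ARCHIVE are example paths, not your real layout:

```python
import os
import tarfile
import time

LOG_DIR = "/u01/app/appl/admin/PROD/log"                        # example path
ARCHIVE = "/backup/patch_logs_" + time.strftime("%Y%m%d") + ".tar.gz"
KEEP_DAYS = 90                                                  # example retention
cutoff = time.time() - KEEP_DAYS * 86400

# Collect plain files older than the cutoff.
old_files = [
    os.path.join(LOG_DIR, name)
    for name in os.listdir(LOG_DIR)
    if os.path.isfile(os.path.join(LOG_DIR, name))
    and os.path.getmtime(os.path.join(LOG_DIR, name)) < cutoff
]

# Write the archive first, then remove the originals.
with tarfile.open(ARCHIVE, "w:gz") as tar:
    for path in old_files:
        tar.add(path)
for path in old_files:
    os.remove(path)
```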
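For the interface-table alert, a scheduled row-count check may be all you need. Here is a minimal sketch using the cx_Oracle driver; the table names, thresholds, and credentials are examples to replace with your own:

```python
import cx_Oracle  # assumes the cx_Oracle driver is installed

# Example interface/temporary tables and row-count thresholds.
WATCHED = {
    "GL.GL_INTERFACE": 50000,
    "AR.RA_INTERFACE_LINES_ALL": 100000,
    "AP.AP_INVOICES_INTERFACE": 50000,
}

conn = cx_Oracle.connect("apps", "apps_password", "PROD")  # example credentials
cur = conn.cursor()
for table, threshold in WATCHED.items():
    cur.execute("SELECT COUNT(*) FROM " + table)
    count, = cur.fetchone()
    if count > threshold:
        print(f"WARNING: {table} has {int(count)} rows (threshold {threshold})")
conn.close()
```

Scheduled from cron (or as a concurrent request), this gives you the alerting without waiting for the tables to become a problem.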
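Before scheduling the workflow purge, it is worth sizing up what it can actually remove. Closed workflow items (those with END_DATE set) are the purge candidates, so a count per item type shows where the volume sits. Again a sketch, with example credentials:

```python
import cx_Oracle

conn = cx_Oracle.connect("apps", "apps_password", "PROD")  # example credentials
cur = conn.cursor()
# Closed items (END_DATE set) are what "Purge Obsolete Workflow
# Runtime Data" is able to remove.
cur.execute("""
    SELECT item_type, COUNT(*)
      FROM wf_items
     WHERE end_date IS NOT NULL
     GROUP BY item_type
     ORDER BY 2 DESC
""")
for item_type, cnt in cur:
    print(f"{item_type:12} {int(cnt):>10} purgeable items")
conn.close()
```

If the counts are large, run the purge item type by item type during quiet hours rather than all at once.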