Five Quick Fixes for Database Reporting Slowdowns

By Chuck Ezell | In Blog, Database Administration | August 4, 2014

Databases are the cornerstone of any business, storing the details that keep it operating. They behave like living things, growing in ways that seem organic, and like living things they require maintenance and care in order to remain effective and functional.

As your database grows, so do the demands you place on the data within it. As a SQL tuner, I was recently interviewed by Software Advice, a business intelligence systems reviewer, about five quick fixes for common database reporting slowdowns:

  1. Purge Nonessential Data
    • Deciding which data you need to hang on to can be an overwhelming task. I recommend that you begin by examining your company's data retention policy and identifying guidelines that make sense for your organization. To be clear, purging is removing data, so be sure to consider whether the data will be required for daily, quarterly, or annual reporting before you delete it. You may be able to archive data that is used infrequently, or create secondary databases to hold calculated values based on the original records.
  2. Create Indices to Increase Query Efficiency
    • One of the easiest ways to improve data access is to examine execution plans for places where an index could speed up retrieval, and to review existing indexes that may themselves be causing performance issues. While it may seem insignificant, even fractions of a second can make a difference when you multiply them across thousands of individual query executions. Adding an index is not always the right answer, so you'll need to do your research, testing, and validation to ensure the index you're considering will bring the benefit you're looking for.
  3. Stabilize Plans That Keep Changing
    • Execution plans change for many reasons, and sometimes you may never find the root cause, but often the change has an obvious trigger. I've seen a good plan disappear because the workload snapshots expired just before a monthly report ran, and all of a sudden a new plan was generated that caused it to run poorly. Changing the SQL even slightly can also trigger a new plan: altering a bind variable, a date range, or even the order of qualifying filters could cause the optimizer to take a completely different route. Run an ASH report on the SQL ID in question and see how many plan hash values you find. Then look back through the AWR snapshots, as far back as you can go, and again count the plan hash values. If you find several, you have an unstable SQL plan. The fastest way to address it is to run a tuning task on the SQL ID and see whether the DBMS_SQLTUNE package will generate a SQL Profile for you. That will stabilize the plan and possibly offer other tuning ideas to take it further.
  4. Organize Data Into Partitions
    • Sometimes purging and adding indices isn't enough to improve performance. In these cases, partitions may help. Vertical partitions reduce the width of a table, a task accomplished by grouping frequently accessed columns together. Horizontal partitions group records by row, keeping results together by a common attribute (consider grouping sales figures by year or by the salesperson involved).
  5. When You Can’t Partition, Shard
    • Sharding takes horizontal partitioning a step further by spreading data structures across multiple servers. It takes considerable planning to execute elegantly; an example might see you breaking a customer table containing millions of rows into multiple databases, perhaps split alphabetically (those with last names beginning with A-M in one location, and N-Z in another). While sharding does require additional hardware and software resources, it allows queries to run lookups on each shard simultaneously and then lace the results together once they are completed.
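The archive-then-purge flow in step 1 can be sketched against a throwaway SQLite database; the table names, sample dates, and retention cutoff below are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (id INTEGER PRIMARY KEY, sale_date TEXT, amount REAL)")
conn.executemany("INSERT INTO sales (sale_date, amount) VALUES (?, ?)", [
    ("2011-03-15", 120.0),  # outside the retention window
    ("2013-11-02", 250.0),
    ("2014-07-20", 300.0),
])

cutoff = "2013-01-01"  # hypothetical policy: keep roughly two years online

# Archive first, then purge: copy rows older than the cutoff, then delete them.
conn.execute("CREATE TABLE sales_archive AS SELECT * FROM sales WHERE 0")
conn.execute("INSERT INTO sales_archive SELECT * FROM sales WHERE sale_date < ?", (cutoff,))
conn.execute("DELETE FROM sales WHERE sale_date < ?", (cutoff,))
conn.commit()

live = conn.execute("SELECT COUNT(*) FROM sales").fetchone()[0]
archived = conn.execute("SELECT COUNT(*) FROM sales_archive").fetchone()[0]
print(live, archived)  # 2 1
```

Copying before deleting is the key ordering: the archived rows stay queryable for that annual report even after the live table shrinks.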
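Step 2's research-and-validate loop looks roughly like the following SQLite sketch: check the plan, add the candidate index, and confirm the optimizer actually picks it up (the table and index names are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
conn.executemany("INSERT INTO orders (customer_id, total) VALUES (?, ?)",
                 [(i % 100, i * 1.5) for i in range(1000)])

def plan(sql):
    # EXPLAIN QUERY PLAN is SQLite's lightweight version of an explain plan.
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT total FROM orders WHERE customer_id = 42"
before = plan(query)  # full table scan: no index covers customer_id yet

conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
after = plan(query)   # the optimizer now searches the index instead

print(before)
print(after)
```

The same before-and-after comparison applies on any engine; only the explain-plan syntax changes.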
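The diagnostic in step 3 boils down to counting distinct plan hash values per SQL ID. The sketch below uses invented sample pairs standing in for workload-repository history; in Oracle you would pull the (sql_id, plan_hash_value) pairs from the AWR views instead:

```python
from collections import defaultdict

# Invented samples standing in for AWR history rows.
samples = [
    ("abc123", 111), ("abc123", 111), ("abc123", 222), ("abc123", 333),
    ("def456", 444), ("def456", 444),
]

plans = defaultdict(set)
for sql_id, plan_hash in samples:
    plans[sql_id].add(plan_hash)

# More than one plan hash value for the same SQL ID means the plan is unstable.
statuses = {sql_id: ("UNSTABLE" if len(hashes) > 1 else "stable")
            for sql_id, hashes in plans.items()}
for sql_id in sorted(statuses):
    print(sql_id, len(plans[sql_id]), statuses[sql_id])
```

Any SQL ID flagged UNSTABLE here is a candidate for a tuning task and a SQL Profile.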
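Step 4's horizontal partitioning can be simulated in SQLite (which has no native partitioning) by routing rows into one table per year, so a yearly report only ever scans that year's rows; the table names and figures are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
rows = [("2013-05-01", 100.0), ("2013-09-12", 80.0), ("2014-02-03", 150.0)]

# One physical table per year simulates range (horizontal) partitioning.
for year in sorted({d[:4] for d, _ in rows}):
    conn.execute(f"CREATE TABLE sales_{year} (sale_date TEXT, amount REAL)")
for sale_date, amount in rows:
    conn.execute(f"INSERT INTO sales_{sale_date[:4]} VALUES (?, ?)", (sale_date, amount))

total_2013 = conn.execute("SELECT SUM(amount) FROM sales_2013").fetchone()[0]
print(total_2013)  # 180.0
```

On engines with declarative partitioning, the optimizer performs this routing (partition pruning) automatically from a partition key in the WHERE clause.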
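Finally, the A-M / N-Z sharding example in step 5 can be sketched with two in-memory SQLite databases standing in for two physical servers; the routing rule and sample customers are invented:

```python
import sqlite3

# Two in-memory databases stand in for two separate servers.
shards = {"A-M": sqlite3.connect(":memory:"), "N-Z": sqlite3.connect(":memory:")}
for db in shards.values():
    db.execute("CREATE TABLE customers (last_name TEXT, balance REAL)")

def shard_for(last_name):
    # Route alphabetically, as in the A-M / N-Z split described above.
    return shards["A-M"] if last_name[0].upper() <= "M" else shards["N-Z"]

for name, balance in [("Anderson", 10.0), ("Nguyen", 20.0),
                      ("Baker", 30.0), ("Zhang", 40.0)]:
    shard_for(name).execute("INSERT INTO customers VALUES (?, ?)", (name, balance))

# A fan-out query runs on every shard, then laces the result sets together.
names = sorted(row[0] for db in shards.values()
               for row in db.execute("SELECT last_name FROM customers"))
print(names)  # ['Anderson', 'Baker', 'Nguyen', 'Zhang']
```

The fan-out-and-merge step at the end is where the planning cost lives: every query that can't be routed to a single shard has to visit all of them.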

No matter your approach, it is important to be forward thinking; the adage that an ounce of prevention is worth a pound of cure applies nicely to data storage.

For help with reporting slowdowns, get in touch with us at Datavail – we’d be glad to be of assistance.

Chuck Ezell
Vice President – Data Integration Practice
With almost 20 years of development experience, Chuck Ezell leads the Development, Tuning and Automation Practice (DTA), bringing knowledge gained from supporting some of the world's largest infrastructures in database and application performance tuning, along with a broad background in process automation, database application development, and DevOps tooling. Although he and his team excel at optimizing and customizing Oracle systems, Chuck works with .NET, T-SQL, C#, PHP, JavaScript, Java, ANSI SQL, PL/SQL, APEX, and many other languages on systems by Oracle, HP, and IBM, running Linux and Windows. He uses SSMS, Oracle OEM, AppDynamics, Splunk, and VisualVM, along with many other tools, for performance tuning operations.
