Tuesday, March 18, 2014

Storing date&time columns


How to store date and time info effectively

Introduction

Data Warehouse databases usually contain a significant amount of date/time information. The physical modeling technique can seriously influence their storage space and usability.

Aspects

Date/time information can be stored in different ways / data types, each of which has its own characteristics.
Basic options:
  • Joint storage: Timestamp
    • Timestamp(n), where n is the number of fractional second digits
  • Separate storage: Date & Time
    • Date column + Time column
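As a minimal illustration of the two options (the table and column names below are invented, not from the original post):

CREATE TABLE demo_event_joint
(
  event_ts   TIMESTAMP(0)                 -- joint storage: date and time in one column
) PRIMARY INDEX (event_ts);

CREATE TABLE demo_event_separate
(
  event_date DATE,                        -- separate storage: date part
  event_time INTEGER FORMAT '99:99:99'    -- separate storage: time as integer 'hhmmss'
) PRIMARY INDEX (event_date);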

Storage space

The data types require the following space (if uncompressed):

Type                                       Space
Date                                       4 bytes
Integer time (integer format '99:99:99')   4 bytes
Time(n)                                    6 bytes, independent of n*
Time(n) with time zone                     8 bytes, independent of n*
Timestamp(n)                               10 bytes, independent of n*
Timestamp(n) with time zone                12 bytes, independent of n*
* n is the number of fractional second digits, n: [0..6]

Usage complexity

Teradata is not the most ergonomic system for handling date-time data. Operations with these data types are typically tricky and sometimes hide traps (try add_months('2014-01-31',1) ). Converting a date is different from converting a timestamp, so decisions must be made by considering both storage and usage aspects.
  • Conversions
    • Date: implicit conversions work, easy and comfortable
    • Integer time: works fine, but an insert-select will lose the formatting, only the integer value will remain
    • Time(n): implicit conversion to string does not work. This fails: select cast('2014-01-31' as date) || ' ' || cast('12:00:00' as time(0))
    • Timestamp(n): brrr. Different precisions will not convert automatically either. I don't like it.
  • Filtering: comparing date/datetime values with < / <= / between operators
    • Joint storage (timestamps)
      Straightforward, just use the values - if they are equivalent data types
    • Separate storage
      You have to convert to a "joint" format, either a string or a timestamp, before comparing (see the sketch after this list)
  • Arithmetic
    • Date: OK, adding a constant and subtracting dates work fine
    • Integer time: do not use arithmetic, the results are wrong!
    • Time(n): interval types are accepted. Not really comfortable, e.g. at most a 99 second long interval is accepted (V13.10)
    • Timestamp(n): same as Time(n)
    Regarding arithmetic, I suggest building your own UDF library; it will ease your life.
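With separate storage, a filter that spans date and time boundaries needs the two columns combined first. A hedged sketch of one way to do it (call_trx, call_date and call_time are invented names; the interval arithmetic may need adjustment on your release):

SELECT *
FROM   call_trx t
WHERE  CAST(t.call_date AS TIMESTAMP(0))
       + ( t.call_time / 10000)         * INTERVAL '1' HOUR
       + ((t.call_time / 100) MOD 100)  * INTERVAL '1' MINUTE
       + ( t.call_time        MOD 100)  * INTERVAL '1' SECOND
       BETWEEN TIMESTAMP '2014-01-31 22:00:00' AND TIMESTAMP '2014-02-01 02:00:00'
AND    t.call_date BETWEEN DATE '2014-01-31' AND DATE '2014-02-01'   -- plain date filter kept for partition elimination
;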

Recommendations

Choosing data type

I recommend choosing the data type depending on the table type and usage purpose.
I differentiate "transaction" and "all other" table types, because transaction tables usually allocate most of the PERM space, while the others are many in number but allocate "negligible" space.
  • Transaction
    • Separate storage
    • Integer time
  • All others
    • Joint type (timestamp)

Saving space - store "delta"

The biggest tables in a data warehouse are the "transaction tables" (call/purchase/transfer/etc. transactions, depending on the industry), and most of them contain several date fields, typically with strong correlation between them. Let me explain what I mean. Let's take a (telco) call record that has the following date(&time) columns:
  • Channel_seizure
  • Call_start
  • Call_end
  • Bill_cycle_start
  • Bill_cycle_end
The date component of the first three columns is the same in 99% of the records, and the last two differ from the first ones by a maximum of 30 days.

My recommendation is the following:
  • Choose a "primary date"
    It must be NOT NULL, and it is typically used as the partitioning key as well, since it is the most frequent date filtering condition. In our case this will be Call_start.
  • Choose separate date-time storage
    E.g. Date and integer time, as this combination requires the least space
  • Store the non-primary dates as deltas, multi-value compressed
    Compute them in the load process (a load-time sketch follows the example below), like this:
    Call_end_delta := Call_end - Call_start
  • Compress the "delta" columns
    They will show low deviation and are highly compressible; use PRISE Compress Wizard
  • Convert back to absolute dates in the view layer
    Call_start + Call_end_delta as "Call_end"
Example:

CREATE TABLE T2000_CALL_TRX
(
...
Call_start_date Date NOT NULL,
Call_end_date_delta Integer COMPRESS (0),
...
) PRIMARY INDEX (...,Call_start_date)
PARTITION BY RANGE_N ( Call_start_date BETWEEN date '2010-01-01' AND date '2020-12-31' EACH interval '1' day, NO RANGE, UNKNOWN)
;

CREATE VIEW V2000_CALL_TRX
as
SELECT
...
, Call_end_date_delta + Call_start_date as "Call_end_date"
...
FROM
T2000_CALL_TRX
;
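The delta computation itself can live in the loading INSERT-SELECT. A minimal, hedged sketch (the staging table stg_call_trx and its columns are invented for illustration; a real load would of course carry all the other columns too):

INSERT INTO T2000_CALL_TRX (Call_start_date, Call_end_date_delta)
SELECT
  s.Call_start_date
, s.Call_end_date - s.Call_start_date   -- DATE minus DATE yields the difference in days
FROM stg_call_trx s;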


Friday, March 07, 2014

How to optimize a Teradata query?


Teradata SQL optimization techniques

Introduction

The typical goal of SQL optimization is to get the result (data set) with fewer computing resources consumed and/or a shorter response time. We can follow several methodologies depending on our experience and studies, but in the end we have to answer the following questions:
  • Is the task really heavy, or is just the execution of the query non-optimal?
  • What is/are the weak point(s) of the query execution?
  • What can I do to make the execution optimal?

Methodologies

The common part of the methodologies is that we have to understand - more or less - what is happening during execution. The more we understand what goes on behind the scenes, the better we can find the appropriate point of intervention. One can start with the trivial stuff: collect some statistics, create indices, and continue with query rewrites, or even with modifying the base table structures.

What is our goal?

First of all we should decide what we have to do:
  1. Optimize a specific query that has run before, and we have its detailed execution information
    The step details clearly show where the big resources were burnt
  2. In general, optimize the non-optimal queries: find them, then solve them
    Like case 1, but first find those queries, and then solve them one by one
  3. Optimize a query that has no detailed execution info, just the SQL (and "explain")
    Deeper knowledge of the base data and the "Teradata way of thinking" is required, since no easy and trustworthy resource peak detection is available. You have to imagine what will happen, and what can be done better

Optimization in practice

This section describes case 2 and assumes that detailed DBQL data is available.
In this post I will not attach example SQLs, because I have also switched to using PRISE Tuning Assistant to get all the information needed for performance tuning, instead of writing complex SQL queries and making heaps of paper notes.

Prerequisites

My opinion is that DBQL (DataBase Query Logging) is the fundamental basis of Teradata system performance management - from the SQL optimization point of view. I strongly recommend switching DBQL comprehensively ON (SQL, Step, Explain and Object logging are important; XML can be excluded, since it is huge but adds little extra), and archiving the online tables daily - just follow the Teradata recommendation.
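A rough sketch of such a logging rule (only an assumption of a sensible default; verify the exact options against your release's documentation before running it):

BEGIN QUERY LOGGING WITH SQL, EXPLAIN, OBJECTS, STEPINFO
LIMIT SQLTEXT=0      -- the full SQL text goes to the SQL log table anyway
ON ALL;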

Finding good candidate queries

DBQL is an excellent source for selecting "low hanging fruits" for performance tuning. The basic rule: we can achieve big savings only on expensive items, so let's focus on the top resource consuming queries first. But what counts as high resource consumption? I usually check the top queries by one or more of these properties:
  • Absolute CPU (CPU totals used by AMPs)
  • Impact CPU (CPU usage corrected by skewness)
  • Absolute I/O (I/O totals used by AMPs)
  • Impact I/O (disk I/O usage corrected by skewness)
  • Spool usage
  • Run duration
PRISE Tuning Assistant supplies an easy to use and quick search function for that:
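If you want to collect candidates by hand instead, a rough DBQL query could look like this (a sketch only: it assumes detail logging into dbc.DBQLogTbl, and column names may differ slightly between releases):

SELECT TOP 50
       UserName
     , QueryID
     , AMPCPUTime                                    -- absolute CPU
     , MaxAMPCPUTime * (HASHAMP() + 1) AS ImpactCPU  -- worst AMP extrapolated to all AMPs
     , TotalIOCount                                  -- absolute I/O
     , SpoolUsage
     , (FirstRespTime - StartTime) DAY(4) TO SECOND AS RunDuration
FROM   dbc.DBQLogTbl
WHERE  CAST(StartTime AS DATE) = CURRENT_DATE - 1
ORDER BY AMPCPUTime DESC;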



Finding weak point of a query

Examining a query begins with the following steps:
  • Does it have few or many "peak steps" that consume most of the resources?
    • Which one(s)?
    • What type of operations are they?
  • Does it have high skewness?
    Bad parallel efficiency, very harmful
  • Does it consume extremely large spool?
    Compared to other queries...
PRISE Tuning Assistant helps here again.
Check the yellow highlights in the middle; those are the top consuming steps:

Most of the queries will have one "peak step" that consumes most of the total resources. Typical cases:
  • "Retrieve step" with redistribution
    Large number of rows and/or skewed target spool
  • "Retrieve step" with "duplication-to-all-AMPs"
    Large number of rows duplicated to all AMPs
  • Product join
    Huge number of comparisons: N * M
  • Merge or Hash join
    Skewed base or prepared (spool) data
  • OLAP function
    Large data set or skewed operation
  • Merge step
    Skewness and/or many hash collisions
  • Any kind of step
    Non-small but strongly skewed result
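If you prefer digging into DBQL directly instead of a tool, a step-level query in the spirit of the sketch below can surface the peak steps (it assumes step logging into dbc.DBQLStepTbl; column names vary slightly between releases, so verify them first):

SELECT s.StepLev1Num
     , s.StepName        -- step type, e.g. RET (retrieve), JIN (join), MRG (merge)
     , s.CPUTime
     , s.IOCount
     , s.EstRowCount     -- the optimizer's estimation
     , s.RowCount        -- the actual row count: a big gap means low "fidelity"
     , s.SpoolUsage
FROM   dbc.DBQLStepTbl s
WHERE  s.QueryID = 123456789012345678   -- hypothetical QueryID of the examined query
ORDER BY s.CPUTime DESC;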

What can we do?

The Teradata optimizer tries its best when it produces the execution plan for a query, however it sometimes lacks proper information or its algorithms are not perfect. We - as humans - may have additional knowledge of either the data or the execution, and we can help the optimizer make better decisions. Let's see our possibilities.
  • Supplement missing / refresh stale statistics
  • Drop misleading statistics (it sometimes happens...)
  • Restructure the query
  • Break up the query: place the partial result into a volatile table with a good PI and collect statistics on it
  • Correct the primary index of the target / source tables
  • Build secondary/join index/indices
  • Add extra components to the query (see the sketch after this list).
    You may know some additional "easy" filter that lightens the work. E.g. if you know that the join will only match the last 3 days of data of a year-covering table, you can add a date filter, which costs pennies compared to the join.
  • Restrict the result requirements to the real information demand.
    Does the end user really need that huge amount of data, or just an extract of it?
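A hedged illustration of the "extra components" idea above (all table and column names are invented): if you know that only the last 3 days of the big table can ever match, say so explicitly, so the optimizer can cut it down before the join.

SELECT t.trx_id
     , d.some_attribute
FROM   big_trx_table t
JOIN   daily_delta_table d
  ON   d.trx_id = t.trx_id
WHERE  t.trx_date >= CURRENT_DATE - 3   -- the "easy" extra filter: costs pennies compared to the join
;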

What should we do?

First of all, we have to find the root cause(s). Why does that specific top step consume such a huge amount of resources, or why does it execute so skewed? If we find and eliminate the cause, the problem is usually solved.
My method is the following:
1. Find the top consuming step, and determine why it is a high consumer
  • Its result is huge
  • Its result is skewed
  • Its work is huge
  • Its input(s) is/are huge
2. Track the spool flow backwards from the top step, and find
  • Low fidelity results (the actual row count falls far from the estimated row count)
  • NO CONFIDENCE steps, specifically with low fidelity
  • Skewed spools, specifically non-small ones
  • Big duplications, specifically with NO CONFIDENCE
3. Find the solution
  • Supplement missing statistics, typically on the PI, join fields or filter conditions
    NO CONFIDENCE, low fidelity, big duplications
  • Break up the query (see the sketch after this list)
    Store the partial result where fidelity is very bad or the spool is skewed into a volatile table, and choose a better PI for it
  • Modify the PI of the target table
    Slow MERGE step, a typical hash-collision problem
  • Eliminate product joins
  • Decompose large product joins
  • Etc.
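A minimal sketch of the "break up the query" step (all names are invented; the point is the explicitly chosen PI and the statistics collected on the intermediate result):

CREATE VOLATILE TABLE vt_part_result AS
(
  SELECT customer_id
       , SUM(amount) AS total_amount
  FROM   big_trx_table
  WHERE  trx_date >= CURRENT_DATE - 30
  GROUP BY 1
)
WITH DATA
PRIMARY INDEX (customer_id)              -- choose a PI that matches the later join
ON COMMIT PRESERVE ROWS;

COLLECT STATISTICS ON vt_part_result COLUMN (customer_id);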
Have a good optimization! :)


Monday, March 03, 2014

No more spool space


Why do I get "No more spool space" error?

This is the most familiar error message in the Teradata world:
"Failure 2646 No more spool space"

What does it really mean, and what causes it?
Let's get back to the basics.

What is spool space?

Spool space is a temporary area used to store intermediate results during query processing, as well as volatile tables. Technically, all free space in the database that is not allocated by PERM data can be used as spool area, as long as PERM data does not claim it.

Each database user may have a "spool limit" that restricts how much spool area the user can allocate at a time. Keep in mind that all active sessions of a user must share this limit.

Teradata is a massively parallel system, therefore the spool limit is interpreted at the AMP level:
E.g. on a 100-AMP system, a user with a 10 GB spool limit can use at most 100 MB of spool per AMP.

What is the spool space limit good for?

This limitation is a quite simple way to cut queries off the system that would consume too many resources. There is no exact relationship between high spool usage and an ineffective query, but statistically the correlation is high.
In practice: a bad query is kicked off before it unnecessarily consumes too many resources.

No more spool space scenarios

System ran out of spool space

This is the rarest situation, you can almost forget about it. It means there is too little free space on the system. It is usually avoided by defining a "SpoolReserve" database in which no objects are created, so that area is always available for spool.
However, if many users with big spool limits run high-spool queries in parallel, this rare situation can still occur.

Multiple sessions of the user are active together

This is also a quite rare situation. Check the active sessions in dbc.SessionInfo.
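A quick way to check it (dbc.SessionInfo is a standard DBC view; adjust the name if your site uses the newer "V" views):

SELECT UserName
     , COUNT(*) AS active_sessions
FROM   dbc.SessionInfo
GROUP BY 1
ORDER BY 2 DESC;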

Volatile tables

All existing volatile tables reside in your spool space, reducing the amount available. If you create many of them, especially with skewed distribution, you can fill up your spool. Choose the "primary index" carefully when defining volatile tables, too.
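One quick check: HELP VOLATILE TABLE lists the volatile tables your session is currently holding, and therefore what is eating into your spool:

HELP VOLATILE TABLE;   -- lists the volatile tables of the current session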

Improper execution plan

These account for more than 90% of the "No more spool space" errors. Let's see how:
  • "Duplication to all AMPs" of a non-small set of records
    The root cause is typically missing or stale statistics. Either the system thinks that far fewer records will be duplicated than in reality (sometimes billions of records end up in this kind of spool), or it knows the number exactly, but on another branch of the query there are more low quality estimations and this execution seems to be cheaper.
  • Redistribution of records by a hash that causes skewed distribution
    Check the corresponding blog post: Accelerate skewed joins
  • Retrieving a huge number of records into spool (locally or redistributed onto the AMPs)
    Specific query structures imply this execution, like a join to a view that "union all"-s big tables.
I suggest using PRISE Tuning Assistant to identify the problem; it clearly displays which execution step runs into the issues above.
Increasing the spool limit will not solve the problem in most cases.
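Since missing or stale statistics are the most common root cause here, refreshing them on the join and filter columns is usually the first thing to try. A hedged sketch (the object names are invented):

COLLECT STATISTICS ON big_trx_table COLUMN (customer_id);   -- join column
COLLECT STATISTICS ON big_trx_table COLUMN (trx_date);      -- filter column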

Too big task

Sometimes a given SQL query requires a big spool area even with the best possible execution plan.
This is the only case when raising the spool limit is the solution. But first you have to make sure that the task is really that big. PRISE Tuning Assistant is a good tool to identify this in a minute.
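Only in this case is raising the limit justified, for example with something like the statement below (the user name and the value are placeholders; spool can also be governed through profiles):

MODIFY USER etl_user AS SPOOL = 200e9 BYTES;   -- raise the spool limit to ~200 GB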
