
A Review of Processing Push-down Part 5:

I want to be sure that I’ve conveyed the concepts behind these criteria properly… I may have rushed it in the early parts of this series.
Let’s imagine a query that joins a 2,000,000 row table with a 1,000 row dimension table, where both live in HDFS.
If all of the data has to be moved from HDFS to the RDBMS, then 2,001,000 rows must be read and moved in order to apply a predicate or any other processing. For fun, let’s say that the cost of moving this data is 2001K.
If there are 10 parallel pipes, then the data movement is completed in one-tenth the time… so the cost is 200K.
If a predicate is included that selects only 5% of the data from the big table, and the predicate is pushed down, the cost is reduced to 101K. Add in the parallel pipes and the cost drops to 10K.
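To keep the arithmetic straight, here is a minimal Python sketch of this back-of-the-envelope cost model. The function name and the convention of counting cost in thousands of rows moved ("K") are mine, purely for illustration.

# Cost model: cost = rows moved, in thousands (K), spread across parallel pipes.
def movement_cost(fact_rows, dim_rows, selectivity=1.0, pipes=1):
    rows_moved = fact_rows * selectivity + dim_rows
    return rows_moved / 1000 / pipes

print(movement_cost(2_000_000, 1_000))                              # 2001.0 -> 2001K
print(movement_cost(2_000_000, 1_000, pipes=10))                    # 200.1  -> ~200K
print(movement_cost(2_000_000, 1_000, selectivity=0.05))            # 101.0  -> 101K
print(movement_cost(2_000_000, 1_000, selectivity=0.05, pipes=10))  # 10.1   -> ~10K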
Now imagine a query that joins the two tables with a predicate on the big-table side and predicate push-down in place, but with the join itself performed in the RDBMS… you still have to pay 101K to pull the projected data up and do the join there. If a join predicate reduces the final answer set by another 95%, then the join returns about 6K rows. Since every approach returns the same 6K rows as the answer, we won’t add that in.
But if you can push the join down as well as the predicates, then only those 6K rows are moved up… so you can see how 2001K shrinks to 6K through the effective push-down of processing.
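Extending the same sketch: if the join is pushed down too, only the joined result crosses the wire. The join_selectivity parameter below is my own shorthand for the further 95% reduction; this rough model yields about 5K rows, in the same ballpark as the 6K figure above.

# With the join pushed down, only the final result rows are moved up.
def pushed_down_join_cost(fact_rows, predicate_selectivity, join_selectivity, pipes=1):
    result_rows = fact_rows * predicate_selectivity * join_selectivity
    return result_rows / 1000 / pipes

print(pushed_down_join_cost(2_000_000, 0.05, 0.05))  # 5.0 -> roughly the 6K answer set
# Compare: doing the join in the RDBMS still costs ~101K just to move the inputs up.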
Further, you can build arbitrarily complex queries and model them pretty well knowing that most of the cost is in data movement.
So think about how Teradata processes these two tables in Hadoop when you use the specialized SQL constructs, and then again when you build the query from a BI tool. And stay tuned, as I’ll show you how HANA processes the data next… and then talk about several others.
