Evaluating Exadata… Does it stack up with RDBMS-Hadoop systems? Part 2

In my earlier blog post I suggested that we could evaluate RDBMS-Hadoop integration architectures using three criteria:
  1. How parallel are the pipes to move data between the RDBMS and the parallel file system;
  2. Is there intelligence to push down predicates; and
  3. Is there more intelligence to push down joins and other relational operators?
But Exadata is a split RDBMS with a parallel file system backing it… how does it measure up by these criteria?
There are effective parallel pipes between the Oracle RAC RDBMS and the Exadata Storage Subsystem… so Exadata passes the first test. Further, Exadata is smart about pushing both scans and projections down to the Storage layer.
Unfortunately there is a fairly severe imbalance between the number of nodes on the RAC side and the number of nodes on the Storage side, and this creates a bottleneck. We cannot give Exadata full marks here… but as far as parallel pipes go, it stacks up pretty well.
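To see why the imbalance matters, here is a back-of-the-envelope sketch in Python. The node counts and per-node rates below are invented for illustration, not published Exadata figures; the point is only that whenever data must move up, the slower side of the pipe sets the pace.

```python
# Purely illustrative numbers -- NOT published Exadata specifications -- to show
# why a node-count imbalance between the two layers becomes a bottleneck.

storage_cells = 14          # assumed number of Storage Subsystem servers
scan_gbs_per_cell = 25.0    # assumed full-scan rate per cell, GB/s
rac_nodes = 8               # assumed number of RAC database servers
ingest_gbs_per_node = 5.0   # assumed rate at which one RAC node can receive rows, GB/s

storage_scan_capacity = storage_cells * scan_gbs_per_cell   # 350 GB/s of scan
rac_ingest_capacity = rac_nodes * ingest_gbs_per_node       # 40 GB/s of ingest

# If most of what is scanned must be shipped up to the RAC layer, the query runs
# at the slower of the two rates -- the pipes are parallel, but not balanced.
effective_rate = min(storage_scan_capacity, rac_ingest_capacity)
print(f"storage can scan  ~{storage_scan_capacity:.0f} GB/s")
print(f"RAC can absorb    ~{rac_ingest_capacity:.0f} GB/s")
print(f"effective rate when data must move up: ~{effective_rate:.0f} GB/s")
```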
The ability to push down predicates goes a long way towards solving this, as predicate push-down reduces the amount of data that has to move over the bottleneck. But in every data warehouse there will be queries that return lots of rows from the early execution steps… and since Exadata cannot join data in the Storage Subsystem, it tries to pull data up sparingly and to push down semi-joins whenever possible… it just cannot be done in every case. (Note: in Exadata POCs Oracle will try to ensure that no queries are included that pull lots of data up to the RAC layer… and competitors will try to include queries that expose this weakness…)
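Continuing the same rough arithmetic: a hypothetical sketch (all table sizes and selectivities are invented) of how the amount of data that survives the pushed-down filter and projection determines whether the bottleneck is felt at all.

```python
# Rough illustration of why predicate push-down helps -- and why queries whose
# early steps return lots of rows still hurt. All figures are invented.

table_gb = 10_000.0      # size of a hypothetical fact table, GB
bottleneck_gbs = 40.0    # effective rate over the RAC/Storage boundary (from the sketch above)

def seconds_to_move(selectivity, projection_fraction=1.0):
    """Time to ship the rows that survive the pushed-down filter (selectivity)
    and projection (fraction of columns kept) up to the RAC layer."""
    shipped_gb = table_gb * selectivity * projection_fraction
    return shipped_gb / bottleneck_gbs

# A selective filter: 0.1% of rows and 20% of columns survive push-down.
print(f"selective query : {seconds_to_move(0.001, 0.2):8.2f} s of data movement")

# An early step that qualifies half the rows: push-down cannot save us.
print(f"fat early step  : {seconds_to_move(0.5, 0.8):8.2f} s of data movement")
```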
So… Oracle also includes some intelligence to push some data down to reduce data movement. There is no way to choose to move data from the RAC layer to the Storage Subsystem and execute the query there… the Storage Subsystem can only scan and project… so again we cannot give Exadata full marks… but it is pretty smart, as you will see when we start looking at alternative implementations.
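To make "push some data down" concrete, here is a minimal sketch of a pushed-down semi-join, using invented table and column names. In practice the key set would be a compact structure such as a Bloom filter rather than a Python set; the point is only that the Storage layer can discard non-joining rows before they cross the bottleneck, even though it cannot perform the join itself.

```python
# Minimal semi-join push-down sketch. The RAC layer derives the join keys from
# the small side of a join and pushes them down; the Storage layer scans,
# filters, projects, and ships up only rows that can possibly join.

dim_store = [  # small dimension table held at the RAC layer
    {"store_id": 1, "region": "WEST"},
    {"store_id": 2, "region": "EAST"},
    {"store_id": 3, "region": "WEST"},
]

fact_sales = [  # large fact table scanned by the Storage layer
    {"store_id": 1, "amount": 10.0},
    {"store_id": 4, "amount": 99.0},   # no matching WEST store -- never shipped up
    {"store_id": 3, "amount": 25.0},
]

# RAC layer: evaluate the dimension predicate and build the key filter.
west_keys = {r["store_id"] for r in dim_store if r["region"] == "WEST"}

# Storage layer: scan, apply the pushed-down key filter, project, ship up.
shipped = [
    {"store_id": r["store_id"], "amount": r["amount"]}
    for r in fact_sales
    if r["store_id"] in west_keys
]

# RAC layer: finish the join on the (much smaller) set of shipped rows.
print(shipped)   # only the two WEST rows cross the bottleneck
```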
Finally, Exadata cannot effectively split a single query plan across both layers… so no marks at all here.
So Exadata is pretty good… but it has weak spots that will be severe for an important set of DW queries in any implementation.
