
Hadoop's sharp edges annoy me

1. Pig vs. Hive

You cannot use Hive UDFs in Pig, and you have to go through HCatalog to access Hive tables from Pig. You cannot use Pig UDFs in Hive either. Whether it's the one little piece of extra functionality I need while in Hive but don't feel like writing a full-on Pig script for, or the "gee, I could easily do this if I were just in Hive" moment while I'm writing Pig scripts, I frequently think, "Tear down this wall!" when I'm working in either.
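To make the wall concrete, here is a minimal sketch of the same trivial UDF written twice, once against Pig's API and once against Hive's. The class and package names come from the standard Pig and Hive APIs; the upper-casing function itself is just a hypothetical example. Neither class is usable from the other tool.

```java
import java.io.IOException;

// Pig UDF: extend org.apache.pig.EvalFunc and unpack the input Tuple yourself.
public class UpperForPig extends org.apache.pig.EvalFunc<String> {
    @Override
    public String exec(org.apache.pig.data.Tuple input) throws IOException {
        if (input == null || input.size() == 0 || input.get(0) == null) {
            return null;
        }
        return input.get(0).toString().toUpperCase();
    }
}

// Hive UDF: extend org.apache.hadoop.hive.ql.exec.UDF and expose an evaluate() method.
class UpperForHive extends org.apache.hadoop.hive.ql.exec.UDF {
    public String evaluate(String s) {
        return s == null ? null : s.toUpperCase();
    }
}
```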

2. Being forced to store all my shared libraries in HDFS

This is a recurring theme in Hadoop. If you store your Pig script on HDFS, it assumes any JAR files it references will be there as well (I'm working on fixing that myself). The same theme repeats in Oozie and other tools. It's usually sensible, but at times, being forced onto one organization-wide shared-library version is painful. Besides, more than half the time these are the same JAR files you installed everywhere you installed the client, so why store them twice?
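For what it's worth, the "store them twice" step usually amounts to something like this minimal sketch: copying the JARs that already ship with the client install into a shared lib directory on HDFS so scripts stored on HDFS can find them. The local and HDFS paths here are hypothetical.

```java
import java.io.File;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class PublishSharedLibs {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        Path hdfsLibDir = new Path("/apps/shared/lib");
        fs.mkdirs(hdfsLibDir);

        // These JARs are already installed on every node that has the client.
        File[] localJars = new File("/usr/lib/myproject/lib").listFiles();
        if (localJars == null) {
            return;
        }
        for (File jar : localJars) {
            if (jar.getName().endsWith(".jar")) {
                fs.copyFromLocalFile(new Path(jar.getAbsolutePath()),
                                     new Path(hdfsLibDir, jar.getName()));
            }
        }
    }
}
```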

3. Oozie

Debugging you is not fun, especially since the docs are full of examples written against the old schema. When you get an error, it usually has nothing to do with whatever you did wrong. It may be a "protocol error" caused by a configuration typo, or a schema-validation error for a workflow that passes the offline schema validator but still fails on the server. To a great degree, Oozie is like Ant or Maven, except distributed, without tooling, and a bit brittle.
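As an illustration of where those opaque errors surface, here is a minimal sketch of submitting a workflow with the Oozie Java client; the server URL, HDFS paths, and property names are hypothetical. The catch block is exactly where a "protocol error" for a simple typo tends to show up.

```java
import java.util.Properties;
import org.apache.oozie.client.OozieClient;
import org.apache.oozie.client.OozieClientException;

public class OozieSubmitSketch {
    public static void main(String[] args) {
        OozieClient oozie = new OozieClient("http://oozie.example.com:11000/oozie");

        Properties props = oozie.createConfiguration();
        props.setProperty(OozieClient.APP_PATH, "hdfs://mycluster/user/etl/workflows/daily");
        props.setProperty("nameNode", "hdfs://mycluster");
        props.setProperty("jobTracker", "resourcemanager.example.com:8032");

        try {
            String jobId = oozie.run(props);
            System.out.println("Submitted workflow: " + jobId);
        } catch (OozieClientException e) {
            // A typo in the properties or a schema mismatch in workflow.xml often
            // lands here with a message that says little about the real cause.
            System.err.println("Oozie rejected the job: " + e.getMessage());
        }
    }
}
```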

4. Error messages

You're joking, right? Speaking of error messages, my favorite is the one where any of the Hadoop tools says "failure, no error returned," which translates to "something happened; good luck finding out what."

5. Kerberos

If you want to secure Hadoop in a way that was relatively well thought out, you get to use Kerberos. Remember Kerberos, and how much fun that antiquated thing is? So you go straight LDAP instead, except that nothing in Hadoop is integrated: there's no single sign-on, no SAML, no OAuth, and nothing passes credentials around (instead, everything re-authenticates and re-authorizes). Even more fun, each part of the Hadoop ecosystem wrote its own LDAP support, so it's inconsistent.
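For context, the Kerberos dance every client has to do looks roughly like this minimal sketch using Hadoop's UserGroupInformation API; the principal and keytab path are hypothetical. Each tool in the stack typically repeats a step like this rather than reusing a session, which is the re-authenticate-and-re-authorize pattern complained about above.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;

public class KerberosClientSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("hadoop.security.authentication", "kerberos");
        UserGroupInformation.setConfiguration(conf);

        // Authenticate against the KDC with a keytab; no credentials are handed
        // over from any other tool that already did this same dance.
        UserGroupInformation.loginUserFromKeytab(
                "etl-user@EXAMPLE.COM", "/etc/security/keytabs/etl-user.keytab");

        // Once logged in, ordinary FileSystem calls carry the Kerberos credentials.
        FileSystem fs = FileSystem.get(conf);
        System.out.println(fs.listStatus(new Path("/user")).length);
    }
}
```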

6. Knox

Because writing a proper LDAP connector apparently needs to be done at least 100 more times in Java before we get it right. Gosh, go look at that code -- it doesn't really pool connections properly. In fact, I kind of think Knox was created out of a zeal for Java or something. You could do the same thing with a well-written Apache config using mod_proxy and mod_rewrite; in fact, that's basically what Knox is, except in Java. To boot, after it authenticates and authorizes, it doesn't pass that information on to Hive or WebHDFS or whatever you're accessing, which gets to do it all over again.

7. Hive won't let me have my external table and delete it too

If you let Hive manage your tables, it deletes the underlying data when you drop a table. If you have an external table, it does not. Why can't there be a "drop external table too" option or something? Why do I have to do the delete outside of Hive if I really want the data gone? Also, while Hive is practically evolving into an RDBMS, why doesn't it have Update and Delete?
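The manual two-step I'm complaining about looks roughly like this minimal sketch: drop the external table through Hive, then delete its data directory yourself on HDFS. The JDBC URL, table name, and data path are hypothetical.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class DropExternalTable {
    public static void main(String[] args) throws Exception {
        // Step 1: drop the table definition; Hive leaves the external data alone.
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:hive2://hiveserver.example.com:10000/default");
             Statement stmt = conn.createStatement()) {
            stmt.execute("DROP TABLE IF EXISTS web_logs_external");
        }

        // Step 2: delete the data "outside" of Hive, by hand, on HDFS.
        FileSystem fs = FileSystem.get(new Configuration());
        fs.delete(new Path("/data/external/web_logs"), true);
    }
}
```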

8. Namenode fail

Oozie, Knox, and several other parts of Hadoop do not obey the new Namenode HA stuff. You can have HA Hadoop, so long as you don’t use anything else with it.
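For reference, this is roughly the client-side HA setup those tools need to honor: a logical nameservice that resolves to whichever NameNode is active. This minimal sketch uses hypothetical hostnames and the hypothetical nameservice name "mycluster"; a tool that builds its own configuration without these keys stays pinned to a single NameNode.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HaClientSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://mycluster");
        conf.set("dfs.nameservices", "mycluster");
        conf.set("dfs.ha.namenodes.mycluster", "nn1,nn2");
        conf.set("dfs.namenode.rpc-address.mycluster.nn1", "namenode1.example.com:8020");
        conf.set("dfs.namenode.rpc-address.mycluster.nn2", "namenode2.example.com:8020");
        conf.set("dfs.client.failover.proxy.provider.mycluster",
                 "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider");

        // The logical URI resolves to whichever NameNode is currently active.
        FileSystem fs = FileSystem.get(conf);
        System.out.println(fs.exists(new Path("/")));
    }
}
```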

9. Documentation

It's cliché to complain about documentation, but check this out: line 37 is wrong -- worse, it is wrong in every post all over the Internet, which proves that no one even bothered to run the example before checking it in. The Oozie documentation is even more dreadful, and most of its examples won't pass schema validation on the version they're meant for.

10. Ambari coverage

I have trouble criticizing Ambari; given what I know about Hadoop's architecture, it's amazing Ambari works at all. That said, its shortcomings can be annoying. For example, Ambari doesn't install -- or in some cases doesn't install correctly -- many items, including various HA settings, Knox, and much, much more. I'm sure it will get better, but "manually install afterward" or "we'll have to create a Puppet script for the rest" shouldn't appear in my emails or documentation any longer.

11. Repository management

Speaking of Ambari, have you ever done an install while the repositories were being upgraded? I have -- it does not behave well. In fact, sometimes it finds the fastest (and most out-of-date) mirror, and it doesn't care whether what it pulls down is in any way compatible. You can configure your way out of that part, but it's still annoying the first time you install mismatched pieces of Hadoop across a few hundred nodes.

12. Null pointer exceptions

I seem to find them everywhere. Often they turn out to be parse errors or other faults I've caused, but they still shouldn't be exposed as null pointer exceptions in Pig, Hive, HDFS, and so on.

The response to any list of complaints like this will of course be "patches welcome!" or "hey, I'm working on it." Hadoop has come a long way and is certainly one of my favorite tools, but boy, those sharp edges annoy me.
