In general, plain Unix permissions aren’t sufficient when you have permission requirements that don’t map cleanly to an enterprise’s natural hierarchy of users and groups. HDFS ACLs, available starting in Apache Hadoop 2.4.0, give you the ability to specify fine-grained file permissions for specific named users or named groups, not just the file’s owner and group. HDFS ACLs are modeled after POSIX ACLs. Best practice is to rely on traditional permission bits to implement most permission requirements, and to define a smaller number of ACLs that augment the permission bits with a few exceptional rules.
To use ACLs, first we’ll need to enable ACLs on the NameNode by adding the following configuration property to hdfs-site.xml and restarting the NameNode.
<property>
  <name>dfs.namenode.acls.enabled</name>
  <value>true</value>
</property>
Most users will interact with ACLs using two new commands added to the HDFS CLI: setfacl and getfacl. The following examples show how HDFS ACLs can help implement complex security requirements.
EXAMPLE 1: GRANTING ACCESS TO ANOTHER NAMED GROUP
Set an ACL that grants read access to sales-data for members of the execs group.
· Set the ACL.
> hdfs dfs -setfacl -m group:execs:r-- /sales-data
· Check results by running getfacl.
> hdfs dfs -getfacl /sales-data
# file: /sales-data
# owner: bruce
# group: sales
user::rw-
group::r--
group:execs:r--
mask::r--
other::---
· Additionally, the output of ls has been modified to append ‘+’ to the permissions of a file or directory that has an ACL.
> hdfs dfs -ls /sales-data
Found 1 items
-rw-r-----+  3 bruce sales /sales-data
The new ACL entry is added to the existing permissions defined by the permission bits. User bruce has full control as the file owner. Members of either the sales group or the execs group have read access. All others have no access.
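The same operation is also available programmatically through the Hadoop FileSystem Java API introduced alongside the shell commands. Here is a minimal sketch (the class names are from org.apache.hadoop.fs and org.apache.hadoop.fs.permission; the path and group mirror the example above):

import java.util.Collections;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.AclEntry;
import org.apache.hadoop.fs.permission.AclEntryScope;
import org.apache.hadoop.fs.permission.AclEntryType;
import org.apache.hadoop.fs.permission.FsAction;

public class SetAclExample {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path salesData = new Path("/sales-data");

    // Equivalent of: hdfs dfs -setfacl -m group:execs:r-- /sales-data
    AclEntry execsRead = new AclEntry.Builder()
        .setScope(AclEntryScope.ACCESS)
        .setType(AclEntryType.GROUP)
        .setName("execs")
        .setPermission(FsAction.READ)
        .build();
    fs.modifyAclEntries(salesData, Collections.singletonList(execsRead));

    // Equivalent of: hdfs dfs -getfacl /sales-data
    System.out.println(fs.getAclStatus(salesData));
  }
}

Like setfacl -m, modifyAclEntries merges the new entries with whatever ACL already exists rather than replacing it wholesale.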
EXAMPLE 2: USING A DEFAULT ACL FOR AUTOMATIC APPLICATION TO NEW CHILDREN
In addition to an ACL enforced during permission checks, there is also a separate concept of a default ACL. A default ACL may be applied only to a directory, not a file. Default ACLs have no direct effect on permission checks and instead define the ACL that newly created child files and directories receive automatically.
Suppose we have a monthly-sales-data directory, further sub-divided into separate directories for each month. Let’s set a default ACL to guarantee that members of the execs group automatically get access to new sub-directories, as they get created for each month.
· Set default ACL on parent directory.
> hdfs dfs -setfacl -m default:group:execs:r-x /monthly-sales-data
· Make sub-directories.
> hdfs dfs -mkdir /monthly-sales-data/JAN
> hdfs dfs -mkdir /monthly-sales-data/FEB
· Verify HDFS has automatically applied default ACL to sub-directories.
> hdfs dfs -getfacl -R /monthly-sales-data
# file: /monthly-sales-data
# owner: bruce
# group: sales
user::rwx
group::r-x
other::---
default:user::rwx
default:group::r-x
default:group:execs:r-x
default:mask::r-x
default:other::---

# file: /monthly-sales-data/FEB
# owner: bruce
# group: sales
user::rwx
group::r-x
group:execs:r-x
mask::r-x
other::---
default:user::rwx
default:group::r-x
default:group:execs:r-x
default:mask::r-x
default:other::---

# file: /monthly-sales-data/JAN
# owner: bruce
# group: sales
user::rwx
group::r-x
group:execs:r-x
mask::r-x
other::---
default:user::rwx
default:group::r-x
default:group:execs:r-x
default:mask::r-x
default:other::---
The default ACL is copied from the parent directory to the child file or child directory at time of creation. Subsequent changes to the parent directory’s default ACL do not alter the ACLs of existing children.
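Programmatically, a default ACL entry is simply an entry with DEFAULT scope. A minimal sketch, reusing the fs handle from the earlier Java example:

// Equivalent of: hdfs dfs -setfacl -m default:group:execs:r-x /monthly-sales-data
AclEntry defaultExecs = new AclEntry.Builder()
    .setScope(AclEntryScope.DEFAULT)   // applies to newly created children, not this directory
    .setType(AclEntryType.GROUP)
    .setName("execs")
    .setPermission(FsAction.READ_EXECUTE)
    .build();
fs.modifyAclEntries(new Path("/monthly-sales-data"),
    Collections.singletonList(defaultExecs));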
EXAMPLE 3: BLOCKING ACCESS TO A SUB-TREE FOR A SPECIFIC USER
Suppose there is an emergency need to block access to an entire sub-tree for a specific user. Applying a named user ACL entry to the root of that sub-tree is the fastest way to accomplish this without accidentally revoking permissions for other users.
· Add ACL entry to block all access to monthly-sales-data by user diana.
> hdfs dfs -setfacl -m user:diana:--- /monthly-sales-data
· Check results by running getfacl.
> hdfs dfs -getfacl /monthly-sales-data
# file: /monthly-sales-data
# owner: bruce
# group: sales
user::rwx
user:diana:---
group::r-x
mask::r-x
other::---
default:user::rwx
default:group::r-x
default:group:execs:r-x
default:mask::r-x
default:other::---
It’s important to keep in mind the order of evaluation for ACL entries when a user attempts to access a file system object:
1. If the user is the file owner, then the owner permission bits are enforced.
2. Else if the user has a named user ACL entry, then those permissions are enforced.
3. Else if the user is a member of the file’s group or any named group in an ACL entry, then the union of permissions for all matching entries is enforced. (The user may be a member of multiple groups.)
4. If none of the above were applicable, then the other permission bits are enforced.
In this example, the named user ACL entry accomplished the goal, because the user is not the file owner and the named user entry takes precedence over all other entries.
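As an illustration only, the evaluation order can be sketched roughly as follows. This is a simplified model, not the actual NameNode enforcement code, and all of the types and helper names here are hypothetical; it also applies the ACL mask to named entries and the owning group, following POSIX ACL semantics:

import java.util.Map;
import java.util.Set;

// Simplified sketch of the evaluation order above; not the real NameNode code.
// Permissions are bit masks: r=4, w=2, x=1.
class AclCheckSketch {
  static boolean implies(int granted, int requested) {
    return (granted & requested) == requested;
  }

  static boolean hasAccess(String user, Set<String> userGroups, int requested,
                           String owner, int ownerPerms,
                           Map<String, Integer> namedUserEntries,
                           String fileGroup, int groupPerms,
                           Map<String, Integer> namedGroupEntries,
                           int mask, int otherPerms) {
    // 1. The file owner is checked against the owner permission bits.
    if (user.equals(owner)) return implies(ownerPerms, requested);
    // 2. A named user ACL entry takes precedence next (filtered by the mask).
    Integer named = namedUserEntries.get(user);
    if (named != null) return implies(named & mask, requested);
    // 3. Union of the owning group and all matching named group entries.
    int union = 0;
    boolean matched = false;
    if (userGroups.contains(fileGroup)) { union |= groupPerms; matched = true; }
    for (Map.Entry<String, Integer> e : namedGroupEntries.entrySet()) {
      if (userGroups.contains(e.getKey())) { union |= e.getValue(); matched = true; }
    }
    if (matched) return implies(union & mask, requested);
    // 4. Fall back to the other permission bits.
    return implies(otherPerms, requested);
  }
}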
DEVELOPMENT
The scope of the effort required coding across multiple layers of HDFS: new APIs, new shell commands, new file system metadata persisted in the NameNode, and enhancements to permission enforcement logic. We aimed to match the ACL implementation on Linux as closely as possible, to make the feature familiar and easy to use for system administrators. We also wanted to make sure that ACLs would compose well with other existing HDFS features, like snapshots, the sticky bit, and WebHDFS.