cjyu edited this page Jan 24, 2013 · 16 revisions

Azkaban2 is designed so that a number of important functionalities are plugins, so that
a) they can be selectively installed or upgraded in different environments without changing the core of Azkaban2, and
b) Azkaban2 is easy to extend for different systems.

These plugins handle important and essential functionality of Azkaban2, and we recommend installing them along with Azkaban2.

Currently we have the following plugins:

  • HDFS filesystem browser plugin.

  • jobtype plugin, in azkaban-plugins/plugins/jobtype
    The default jobtype plugin bundle enables the 'java' and 'pig' job types, which can run MapReduce and Pig jobs on a Hadoop cluster.
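For example, a flow job that uses the bundled 'pig' jobtype is just a small properties file. The parameter name pig.script and the script path below are assumptions for illustration; check the pig jobtype's own documentation for the exact keys:

```properties
# wordcount.job -- hypothetical Azkaban job file using the 'pig' jobtype
type=pig
pig.script=scripts/wordcount.pig
```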

To Install These Plugins
Get azkaban-plugins, either by downloading the tarball or by checking it out from GitHub.
Untar it into a my-azkaban-plugins directory.
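The untar step might look like the following shell sketch. The tarball name and layout are illustrative; a small stand-in archive is created first so the commands can run as written:

```shell
# Stand-in archive so this sketch runs as-is; in practice this would be
# the azkaban-plugins tarball downloaded or exported from GitHub.
mkdir -p azkaban-plugins/plugins/jobtype
tar -czf azkaban-plugins.tar.gz azkaban-plugins
rm -r azkaban-plugins

# Untar into the my-azkaban-plugins directory:
mkdir -p my-azkaban-plugins
tar -xzf azkaban-plugins.tar.gz -C my-azkaban-plugins --strip-components=1

ls my-azkaban-plugins/plugins/jobtype
```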

Installing the jobtype plugins:

1. Build the bundle. Go to azkaban-plugins/plugins/jobtype and run

       ant package

   This should create an install tarball at my-azkaban-plugins/dist/jobtype/packages/azkaban-jobtype*.tar.gz.
2. Tell Azkaban2 where to load the plugin jobtypes from, in Azkaban2-executor-server-install-directory/conf/azkaban.properties:

       azkaban.jobtype.plugin.dir=where-my-jobtype-plugin-go

   If you don't set it, it defaults to 'plugins/jobtypes'.
3. Place the jobtype plugin tarball in the 'plugins' directory, untar it, and rename the resulting directory 'jobtypes' (or whatever name you set in step 2).
4. The 'java' and 'pig' job types in the default package work with Hadoop clusters, so you should edit the jobtypes/commonprivate.properties file and fill in the necessary Hadoop cluster security settings. Here are the settings you will likely use:

   hadoop.security.manager.class : the class used to issue Hadoop tokens or obtain proxy users. The enclosed one is for Hadoop 1.x; choose one that works with your specific Hadoop installation and security settings.
   azkaban.should.proxy          : whether or not Azkaban should proxy as a user. This should normally be on; otherwise all Hadoop jobs run as the azkaban user.
   proxy.keytab.location         : Kerberos keytab location.
   proxy.user                    : the proxy user for Azkaban in the Hadoop settings.
   jobtype.global.classpath      : jars in this list are inherited by all job types.
   jobtype.global.jvm.args       : JVM settings in this list are inherited by all job types.
5. Start the Azkaban2 executor server. Check its log to verify that it loads the plugin jobtypes correctly. You should also run MapReduce/Pig jobs to validate the installation before releasing Azkaban2 to your users.
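Taken together, a commonprivate.properties for a kerberized Hadoop 1.x cluster might look like the sketch below. All of the values (the security manager class name, keytab path, proxy user, and classpath entries) are illustrative assumptions; check the file bundled in the plugin package for the actual defaults:

```properties
# jobtypes/commonprivate.properties -- illustrative values only
# Security manager for Hadoop 1.x (class name is an assumption; use the one
# shipped in your plugin bundle):
hadoop.security.manager.class=azkaban.security.HadoopSecurityManager_H_1_0
azkaban.should.proxy=true
proxy.user=azkaban
proxy.keytab.location=/etc/security/keytabs/azkaban.keytab
jobtype.global.classpath=${hadoop.home}/hadoop-core*.jar,${hadoop.home}/conf
jobtype.global.jvm.args=
```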

Installing the HDFS browser plugin:
