OpenZFS 8585 - improve batching done in zil_commit()
Authored by: Prakash Surya <prakash.surya@delphix.com>
Reviewed by: Brad Lewis <brad.lewis@delphix.com>
Reviewed by: Matt Ahrens <mahrens@delphix.com>
Reviewed by: George Wilson <george.wilson@delphix.com>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Approved by: Dan McDonald <danmcd@joyent.com>
Ported-by: Prakash Surya <prakash.surya@delphix.com>

Problem
=======

The current implementation of zil_commit() can introduce significant
latency, beyond what is inherent due to the latency of the underlying
storage. The additional latency comes from two main problems:

 1. When there are outstanding ZIL blocks being written (i.e. there's
    already a "writer thread" in progress), any new calls to
    zil_commit() will block, waiting for the currently outstanding ZIL
    blocks to complete. The blocks written for each "writer thread" are
    collectively termed a "batch", and there can only ever be a single
    "batch" being written at a time. When a batch is being written, any
    new ZIL transactions will have to wait for the next batch to be
    written, which won't occur until the current batch finishes.

    As a result, the underlying storage may not be used as efficiently
    as possible. While "new" threads enter zil_commit() and are blocked
    waiting for the next batch, it's possible that the underlying
    storage isn't fully utilized by the current batch of ZIL blocks. In
    that case, it'd be better to allow these new threads to generate
    (and issue) a new ZIL block, such that it could be serviced by the
    underlying storage concurrently with the other ZIL blocks that are
    being serviced.

 2. Any call to zil_commit() must wait for all ZIL blocks in its "batch"
    to complete, prior to zil_commit() returning. The size of any given
    batch is proportional to the number of ZIL transactions in the queue
    at the time the batch starts processing the queue, which doesn't
    occur until the previous batch completes. Thus, if there are a lot
    of transactions in the queue, the batch could be composed of many
    ZIL blocks, and each call to zil_commit() will have to wait for all
    of these writes to complete (even if the thread calling
    zil_commit() only cared about one of the transactions in the batch).

To further complicate the situation, these two issues result in the
following side effect:

 3. If a given batch takes longer to complete than normal, this results
    in larger batch sizes, which then take longer to complete and
    further drive up the latency of zil_commit(). This can occur for a
    number of reasons, including (but not limited to): transient changes
    in the workload, and storage latency irregularities.
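
For reference, the pre-patch behavior described above corresponds roughly
to the following simplified sketch of the old zil_commit(), built around
the zl_writer, zl_next_batch, and zl_com_batch fields that this change
removes. This is illustrative only, not the verbatim pre-patch code:

    void
    zil_commit_old(zilog_t *zilog, uint64_t foid)
    {
            uint64_t mybatch;

            /* Move this file's async itxs onto the sync queues. */
            zil_async_to_sync(zilog, foid);

            mutex_enter(&zilog->zl_lock);
            mybatch = zilog->zl_next_batch;

            /* Problem 1: only one batch may be in flight at a time. */
            while (zilog->zl_writer) {
                    cv_wait(&zilog->zl_cv_batch[mybatch & 1],
                        &zilog->zl_lock);
                    if (mybatch <= zilog->zl_com_batch) {
                            mutex_exit(&zilog->zl_lock);
                            return;
                    }
            }

            /*
             * Problem 2: this thread becomes the writer and waits for
             * every ZIL block in the batch, not just the blocks holding
             * the transactions it actually cares about.
             */
            zilog->zl_next_batch++;
            zilog->zl_writer = B_TRUE;
            zil_commit_writer(zilog);
            zilog->zl_com_batch = mybatch;
            zilog->zl_writer = B_FALSE;

            /* Wake the next writer and everyone waiting on this batch. */
            cv_signal(&zilog->zl_cv_batch[(mybatch + 1) & 1]);
            cv_broadcast(&zilog->zl_cv_batch[mybatch & 1]);
            mutex_exit(&zilog->zl_lock);
    }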

Solution
========

The solution attempted by this change has the following goals:

 1. no on-disk changes; maintain current on-disk format.
 2. modify the "batch size" to be equal to the "ZIL block size".
 3. allow new batches to be generated and issued to disk, while there are
    already batches being serviced by the disk.
 4. allow zil_commit() to wait for as few ZIL blocks as possible.
 5. use as few ZIL blocks as possible, for the same amount of ZIL
    transactions, without introducing significant latency to any
    individual ZIL transaction. i.e. use fewer, but larger, ZIL blocks.

In theory, with these goals met, the new algorithm will allow the
following improvements:

 1. new ZIL blocks can be generated and issued, while there are already
    outstanding ZIL blocks being serviced by the storage.
 2. the latency of zil_commit() should be proportional to the underlying
    storage latency, rather than to the incoming synchronous workload.
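
At a high level, the new flow looks roughly like the following. This is a
simplified sketch using names introduced by this patch (the TX_COMMIT itx
type, the zil_commit_waiter_t structure, and the zcw_zio_error field); see
zil.c in the patch for the real implementation:

    void
    zil_commit(zilog_t *zilog, uint64_t foid)
    {
            zil_commit_waiter_t *zcw;

            if (zilog->zl_sync == ZFS_SYNC_DISABLED)
                    return;

            /* Move this file's async itxs onto the sync queues. */
            zil_async_to_sync(zilog, foid);

            /*
             * Each caller gets its own waiter, attached to a TX_COMMIT
             * itx; the waiter is signalled once the lwb containing that
             * itx is on stable storage, independent of any other lwbs
             * that may still be in flight.
             */
            zcw = zil_alloc_commit_waiter();
            zil_commit_itx_assign(zilog, zcw);

            zil_commit_writer(zilog, zcw);  /* issue lwbs, don't wait */
            zil_commit_waiter(zilog, zcw);  /* wait for just this waiter */

            /*
             * If the lwb write failed, fall back to txg_wait_synced()
             * so the caller still gets its durability guarantee.
             */
            if (zcw->zcw_zio_error != 0)
                    txg_wait_synced(zilog->zl_dmu_pool, 0);

            zil_free_commit_waiter(zcw);
    }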

Porting Notes
=============

Due to the changes made in commit 119a394, the lifetime of an itx
structure differs from that in OpenZFS. Specifically, the itx structure is
kept around until the data associated with the itx is considered to be
safe on disk; this is so that the itx's callback can be called after the
data is committed to stable storage. Since OpenZFS doesn't have this itx
callback mechanism, it's able to destroy the itx structure immediately
after the itx is committed to an lwb (before the lwb is written to
disk).

To support this difference, and to ensure the itx's callbacks can still
be called after the itx's data is on disk, a few changes had to be made:

  * A list of itxs was added to the lwb structure. This list contains
    all of the itxs that have been committed to the lwb, such that the
    callbacks for these itxs can be called from zil_lwb_flush_vdevs_done(),
    after the data for the itxs is committed to disk.

  * A list of itxs was added on the stack of the zil_process_commit_list()
    function; the "nolwb_itxs" list. In some circumstances, an itx may
    not be committed to an lwb (e.g. if allocating the "next" ZIL block
    on disk fails), so this list is used to keep track of which itxs
    fall into this state, such that their callbacks can be called after
    the ZIL's writer pipeline is "stalled".

  * The logic to actually call the itx's callback was moved into the
    zil_itx_destroy() function. Since all consumers of zil_itx_destroy()
    were effectively performing the same logic (i.e. if callback is
    non-null, call the callback), it seemed like useful code cleanup to
    consolidate this logic into a single function (see the sketch
    following this list).
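
Putting these points together, the consolidated destroy path is
conceptually the following (a simplified sketch; memory management
details are elided):

    void
    zil_itx_destroy(itx_t *itx)
    {
            /*
             * The callback is now always fired here, whether the itx
             * made it into an lwb (destroyed from
             * zil_lwb_flush_vdevs_done() once the lwb's data is on disk)
             * or not (destroyed via the "nolwb_itxs" list after the
             * ZIL's writer pipeline is "stalled").
             */
            if (itx->itx_callback != NULL)
                    itx->itx_callback(itx->itx_callback_data);

            /* ... free the itx memory ... */
    }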

Additionally, the existing Linux tracepoint infrastructure dealing with
the ZIL's probes and structures had to be updated to reflect these code
changes. Specifically:

  * The "zil__cw1" and "zil__cw2" probes were removed, so they had to be
    removed from "trace_zil.h" as well.

  * Some of the zilog structure's fields were removed, which affected
    the tracepoint definitions of the structure.

  * New tracepoints had to be added for the following three new probes
    (example call sites are shown after this list):
      * zil__process__commit__itx
      * zil__process__normal__itx
      * zil__commit__io__error
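
The new probes are invoked from zil.c with DTRACE_PROBE2() and map onto
the event classes defined in "trace_zil.h". The call sites below are
illustrative; argument types are as declared by the classes:

    /* When an itx is processed from the commit list: */
    DTRACE_PROBE2(zil__process__commit__itx, zilog_t *, zilog, itx_t *, itx);
    DTRACE_PROBE2(zil__process__normal__itx, zilog_t *, zilog, itx_t *, itx);

    /* When a commit waiter observes an lwb write/flush error: */
    DTRACE_PROBE2(zil__commit__io__error, zilog_t *, zilog,
        zil_commit_waiter_t *, zcw);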

OpenZFS-issue: https://www.illumos.org/issues/8585
OpenZFS-commit: openzfs/openzfs@5d95a3a
Closes #6566
Prakash Surya authored and behlendorf committed Dec 5, 2017
1 parent 7b34070 commit 1ce23dc
Showing 15 changed files with 1,587 additions and 378 deletions.
11 changes: 8 additions & 3 deletions cmd/ztest/ztest.c
@@ -2144,14 +2144,15 @@ ztest_get_done(zgd_t *zgd, int error)
ztest_object_unlock(zd, object);

if (error == 0 && zgd->zgd_bp)
zil_add_block(zgd->zgd_zilog, zgd->zgd_bp);
zil_lwb_add_block(zgd->zgd_lwb, zgd->zgd_bp);

umem_free(zgd, sizeof (*zgd));
umem_free(zzp, sizeof (*zzp));
}

static int
ztest_get_data(void *arg, lr_write_t *lr, char *buf, zio_t *zio)
ztest_get_data(void *arg, lr_write_t *lr, char *buf, struct lwb *lwb,
zio_t *zio)
{
ztest_ds_t *zd = arg;
objset_t *os = zd->zd_os;
@@ -2166,6 +2167,10 @@ ztest_get_data(void *arg, lr_write_t *lr, char *buf, zio_t *zio)
int error;
ztest_zgd_private_t *zgd_private;

ASSERT3P(lwb, !=, NULL);
ASSERT3P(zio, !=, NULL);
ASSERT3U(size, !=, 0);

ztest_object_lock(zd, object, RL_READER);
error = dmu_bonus_hold(os, object, FTAG, &db);
if (error) {
@@ -2186,7 +2191,7 @@ ztest_get_data(void *arg, lr_write_t *lr, char *buf, zio_t *zio)
db = NULL;

zgd = umem_zalloc(sizeof (*zgd), UMEM_NOFAIL);
zgd->zgd_zilog = zd->zd_zilog;
zgd->zgd_lwb = lwb;
zgd_private = umem_zalloc(sizeof (ztest_zgd_private_t), UMEM_NOFAIL);
zgd_private->z_zd = zd;
zgd_private->z_object = object;
2 changes: 1 addition & 1 deletion include/sys/dmu.h
@@ -982,7 +982,7 @@ uint64_t dmu_tx_get_txg(dmu_tx_t *tx);
* {zfs,zvol,ztest}_get_done() args
*/
typedef struct zgd {
struct zilog *zgd_zilog;
struct lwb *zgd_lwb;
struct blkptr *zgd_bp;
dmu_buf_t *zgd_db;
struct rl *zgd_rl;
234 changes: 163 additions & 71 deletions include/sys/trace_zil.h
@@ -33,89 +33,181 @@
#include <linux/tracepoint.h>
#include <sys/types.h>

#define ZILOG_TP_STRUCT_ENTRY \
__field(uint64_t, zl_lr_seq) \
__field(uint64_t, zl_commit_lr_seq) \
__field(uint64_t, zl_destroy_txg) \
__field(uint64_t, zl_replaying_seq) \
__field(uint32_t, zl_suspend) \
__field(uint8_t, zl_suspending) \
__field(uint8_t, zl_keep_first) \
__field(uint8_t, zl_replay) \
__field(uint8_t, zl_stop_sync) \
__field(uint8_t, zl_logbias) \
__field(uint8_t, zl_sync) \
__field(int, zl_parse_error) \
__field(uint64_t, zl_parse_blk_seq) \
__field(uint64_t, zl_parse_lr_seq) \
__field(uint64_t, zl_parse_blk_count) \
__field(uint64_t, zl_parse_lr_count) \
__field(uint64_t, zl_cur_used) \
__field(clock_t, zl_replay_time) \
__field(uint64_t, zl_replay_blks)

#define ZILOG_TP_FAST_ASSIGN \
__entry->zl_lr_seq = zilog->zl_lr_seq; \
__entry->zl_commit_lr_seq = zilog->zl_commit_lr_seq; \
__entry->zl_destroy_txg = zilog->zl_destroy_txg; \
__entry->zl_replaying_seq = zilog->zl_replaying_seq; \
__entry->zl_suspend = zilog->zl_suspend; \
__entry->zl_suspending = zilog->zl_suspending; \
__entry->zl_keep_first = zilog->zl_keep_first; \
__entry->zl_replay = zilog->zl_replay; \
__entry->zl_stop_sync = zilog->zl_stop_sync; \
__entry->zl_logbias = zilog->zl_logbias; \
__entry->zl_sync = zilog->zl_sync; \
__entry->zl_parse_error = zilog->zl_parse_error; \
__entry->zl_parse_blk_seq = zilog->zl_parse_blk_seq; \
__entry->zl_parse_lr_seq = zilog->zl_parse_lr_seq; \
__entry->zl_parse_blk_count = zilog->zl_parse_blk_count;\
__entry->zl_parse_lr_count = zilog->zl_parse_lr_count; \
__entry->zl_cur_used = zilog->zl_cur_used; \
__entry->zl_replay_time = zilog->zl_replay_time; \
__entry->zl_replay_blks = zilog->zl_replay_blks;

#define ZILOG_TP_PRINTK_FMT \
"zl { lr_seq %llu commit_lr_seq %llu destroy_txg %llu " \
"replaying_seq %llu suspend %u suspending %u keep_first %u " \
"replay %u stop_sync %u logbias %u sync %u " \
"parse_error %u parse_blk_seq %llu parse_lr_seq %llu " \
"parse_blk_count %llu parse_lr_count %llu " \
"cur_used %llu replay_time %lu replay_blks %llu }"

#define ZILOG_TP_PRINTK_ARGS \
__entry->zl_lr_seq, __entry->zl_commit_lr_seq, \
__entry->zl_destroy_txg, __entry->zl_replaying_seq, \
__entry->zl_suspend, __entry->zl_suspending, \
__entry->zl_keep_first, __entry->zl_replay, \
__entry->zl_stop_sync, __entry->zl_logbias, __entry->zl_sync, \
__entry->zl_parse_error, __entry->zl_parse_blk_seq, \
__entry->zl_parse_lr_seq, __entry->zl_parse_blk_count, \
__entry->zl_parse_lr_count, __entry->zl_cur_used, \
__entry->zl_replay_time, __entry->zl_replay_blks

#define ITX_TP_STRUCT_ENTRY \
__field(itx_wr_state_t, itx_wr_state) \
__field(uint8_t, itx_sync) \
__field(zil_callback_t, itx_callback) \
__field(void *, itx_callback_data) \
__field(uint64_t, itx_oid) \
\
__field(uint64_t, lrc_txtype) \
__field(uint64_t, lrc_reclen) \
__field(uint64_t, lrc_txg) \
__field(uint64_t, lrc_seq)

#define ITX_TP_FAST_ASSIGN \
__entry->itx_wr_state = itx->itx_wr_state; \
__entry->itx_sync = itx->itx_sync; \
__entry->itx_callback = itx->itx_callback; \
__entry->itx_callback_data = itx->itx_callback_data; \
__entry->itx_oid = itx->itx_oid; \
\
__entry->lrc_txtype = itx->itx_lr.lrc_txtype; \
__entry->lrc_reclen = itx->itx_lr.lrc_reclen; \
__entry->lrc_txg = itx->itx_lr.lrc_txg; \
__entry->lrc_seq = itx->itx_lr.lrc_seq;

#define ITX_TP_PRINTK_FMT \
"itx { wr_state %u sync %u callback %p callback_data %p oid %llu" \
" { txtype %llu reclen %llu txg %llu seq %llu } }"

#define ITX_TP_PRINTK_ARGS \
__entry->itx_wr_state, __entry->itx_sync, __entry->itx_callback,\
__entry->itx_callback_data, __entry->itx_oid, \
__entry->lrc_txtype, __entry->lrc_reclen, __entry->lrc_txg, \
__entry->lrc_seq

#define ZCW_TP_STRUCT_ENTRY \
__field(lwb_t *, zcw_lwb) \
__field(boolean_t, zcw_done) \
__field(int, zcw_zio_error) \

#define ZCW_TP_FAST_ASSIGN \
__entry->zcw_lwb = zcw->zcw_lwb; \
__entry->zcw_done = zcw->zcw_done; \
__entry->zcw_zio_error = zcw->zcw_zio_error;

#define ZCW_TP_PRINTK_FMT \
"zcw { lwb %p done %u error %u }"

#define ZCW_TP_PRINTK_ARGS \
__entry->zcw_lwb, __entry->zcw_done, __entry->zcw_zio_error

/*
* Generic support for one argument tracepoints of the form:
* Generic support for two argument tracepoints of the form:
*
* DTRACE_PROBE1(...,
* zilog_t *, ...);
* DTRACE_PROBE2(...,
* zilog_t *, ...,
* itx_t *, ...);
*/
/* BEGIN CSTYLED */
DECLARE_EVENT_CLASS(zfs_zil_class,
TP_PROTO(zilog_t *zilog),
TP_ARGS(zilog),
DECLARE_EVENT_CLASS(zfs_zil_process_itx_class,
TP_PROTO(zilog_t *zilog, itx_t *itx),
TP_ARGS(zilog, itx),
TP_STRUCT__entry(
__field(uint64_t, zl_lr_seq)
__field(uint64_t, zl_commit_lr_seq)
__field(uint64_t, zl_destroy_txg)
__field(uint64_t, zl_replaying_seq)
__field(uint32_t, zl_suspend)
__field(uint8_t, zl_suspending)
__field(uint8_t, zl_keep_first)
__field(uint8_t, zl_replay)
__field(uint8_t, zl_stop_sync)
__field(uint8_t, zl_writer)
__field(uint8_t, zl_logbias)
__field(uint8_t, zl_sync)
__field(int, zl_parse_error)
__field(uint64_t, zl_parse_blk_seq)
__field(uint64_t, zl_parse_lr_seq)
__field(uint64_t, zl_parse_blk_count)
__field(uint64_t, zl_parse_lr_count)
__field(uint64_t, zl_next_batch)
__field(uint64_t, zl_com_batch)
__field(uint64_t, zl_cur_used)
__field(clock_t, zl_replay_time)
__field(uint64_t, zl_replay_blks)
ZILOG_TP_STRUCT_ENTRY
ITX_TP_STRUCT_ENTRY
),
TP_fast_assign(
__entry->zl_lr_seq = zilog->zl_lr_seq;
__entry->zl_commit_lr_seq = zilog->zl_commit_lr_seq;
__entry->zl_destroy_txg = zilog->zl_destroy_txg;
__entry->zl_replaying_seq = zilog->zl_replaying_seq;
__entry->zl_suspend = zilog->zl_suspend;
__entry->zl_suspending = zilog->zl_suspending;
__entry->zl_keep_first = zilog->zl_keep_first;
__entry->zl_replay = zilog->zl_replay;
__entry->zl_stop_sync = zilog->zl_stop_sync;
__entry->zl_writer = zilog->zl_writer;
__entry->zl_logbias = zilog->zl_logbias;
__entry->zl_sync = zilog->zl_sync;
__entry->zl_parse_error = zilog->zl_parse_error;
__entry->zl_parse_blk_seq = zilog->zl_parse_blk_seq;
__entry->zl_parse_lr_seq = zilog->zl_parse_lr_seq;
__entry->zl_parse_blk_count = zilog->zl_parse_blk_count;
__entry->zl_parse_lr_count = zilog->zl_parse_lr_count;
__entry->zl_next_batch = zilog->zl_next_batch;
__entry->zl_com_batch = zilog->zl_com_batch;
__entry->zl_cur_used = zilog->zl_cur_used;
__entry->zl_replay_time = zilog->zl_replay_time;
__entry->zl_replay_blks = zilog->zl_replay_blks;
ZILOG_TP_FAST_ASSIGN
ITX_TP_FAST_ASSIGN
),
TP_printk("zl { lr_seq %llu commit_lr_seq %llu destroy_txg %llu "
"replaying_seq %llu suspend %u suspending %u keep_first %u "
"replay %u stop_sync %u writer %u logbias %u sync %u "
"parse_error %u parse_blk_seq %llu parse_lr_seq %llu "
"parse_blk_count %llu parse_lr_count %llu next_batch %llu "
"com_batch %llu cur_used %llu replay_time %lu replay_blks %llu }",
__entry->zl_lr_seq, __entry->zl_commit_lr_seq,
__entry->zl_destroy_txg, __entry->zl_replaying_seq,
__entry->zl_suspend, __entry->zl_suspending, __entry->zl_keep_first,
__entry->zl_replay, __entry->zl_stop_sync, __entry->zl_writer,
__entry->zl_logbias, __entry->zl_sync, __entry->zl_parse_error,
__entry->zl_parse_blk_seq, __entry->zl_parse_lr_seq,
__entry->zl_parse_blk_count, __entry->zl_parse_lr_count,
__entry->zl_next_batch, __entry->zl_com_batch, __entry->zl_cur_used,
__entry->zl_replay_time, __entry->zl_replay_blks)
TP_printk(
ZILOG_TP_PRINTK_FMT " " ITX_TP_PRINTK_FMT,
ZILOG_TP_PRINTK_ARGS, ITX_TP_PRINTK_ARGS)
);
/* END CSTYLED */

/* BEGIN CSTYLED */
#define DEFINE_ZIL_EVENT(name) \
DEFINE_EVENT(zfs_zil_class, name, \
TP_PROTO(zilog_t *zilog), \
TP_ARGS(zilog))
DEFINE_ZIL_EVENT(zfs_zil__cw1);
DEFINE_ZIL_EVENT(zfs_zil__cw2);
#define DEFINE_ZIL_PROCESS_ITX_EVENT(name) \
DEFINE_EVENT(zfs_zil_process_itx_class, name, \
TP_PROTO(zilog_t *zilog, itx_t *itx), \
TP_ARGS(zilog, itx))
DEFINE_ZIL_PROCESS_ITX_EVENT(zfs_zil__process__commit__itx);
DEFINE_ZIL_PROCESS_ITX_EVENT(zfs_zil__process__normal__itx);
/* END CSTYLED */

/*
* Generic support for two argument tracepoints of the form:
*
* DTRACE_PROBE2(...,
* zilog_t *, ...,
* zil_commit_waiter_t *, ...);
*/
/* BEGIN CSTYLED */
DECLARE_EVENT_CLASS(zfs_zil_commit_io_error_class,
TP_PROTO(zilog_t *zilog, zil_commit_waiter_t *zcw),
TP_ARGS(zilog, zcw),
TP_STRUCT__entry(
ZILOG_TP_STRUCT_ENTRY
ZCW_TP_STRUCT_ENTRY
),
TP_fast_assign(
ZILOG_TP_FAST_ASSIGN
ZCW_TP_FAST_ASSIGN
),
TP_printk(
ZILOG_TP_PRINTK_FMT " " ZCW_TP_PRINTK_FMT,
ZILOG_TP_PRINTK_ARGS, ZCW_TP_PRINTK_ARGS)
);

/* BEGIN CSTYLED */
#define DEFINE_ZIL_COMMIT_IO_ERROR_EVENT(name) \
DEFINE_EVENT(zfs_zil_commit_io_error_class, name, \
TP_PROTO(zilog_t *zilog, zil_commit_waiter_t *zcw), \
TP_ARGS(zilog, zcw))
DEFINE_ZIL_COMMIT_IO_ERROR_EVENT(zfs_zil__commit__io__error);
/* END CSTYLED */

#endif /* _TRACE_ZIL_H */
8 changes: 6 additions & 2 deletions include/sys/zil.h
@@ -40,6 +40,7 @@ extern "C" {

struct dsl_pool;
struct dsl_dataset;
struct lwb;

/*
* Intent log format:
@@ -140,6 +141,7 @@ typedef enum zil_create {
/*
* Intent log transaction types and record structures
*/
#define TX_COMMIT 0 /* Commit marker (no on-disk state) */
#define TX_CREATE 1 /* Create file */
#define TX_MKDIR 2 /* Make directory */
#define TX_MKXATTR 3 /* Make XATTR directory */
@@ -465,7 +467,8 @@ typedef int zil_parse_blk_func_t(zilog_t *zilog, blkptr_t *bp, void *arg,
typedef int zil_parse_lr_func_t(zilog_t *zilog, lr_t *lr, void *arg,
uint64_t txg);
typedef int zil_replay_func_t(void *arg1, void *arg2, boolean_t byteswap);
typedef int zil_get_data_t(void *arg, lr_write_t *lr, char *dbuf, zio_t *zio);
typedef int zil_get_data_t(void *arg, lr_write_t *lr, char *dbuf,
struct lwb *lwb, zio_t *zio);

extern int zil_parse(zilog_t *zilog, zil_parse_blk_func_t *parse_blk_func,
zil_parse_lr_func_t *parse_lr_func, void *arg, uint64_t txg,
@@ -503,7 +506,8 @@ extern void zil_clean(zilog_t *zilog, uint64_t synced_txg);
extern int zil_suspend(const char *osname, void **cookiep);
extern void zil_resume(void *cookie);

extern void zil_add_block(zilog_t *zilog, const blkptr_t *bp);
extern void zil_lwb_add_block(struct lwb *lwb, const blkptr_t *bp);
extern void zil_lwb_add_txg(struct lwb *lwb, uint64_t txg);
extern int zil_bp_tree_add(zilog_t *zilog, const blkptr_t *bp);

extern void zil_set_sync(zilog_t *zilog, uint64_t syncval);
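
Taken together with the zgd_t change above, a zil_get_data_t
implementation now receives the issuing lwb and threads it through to its
"done" callback, which registers blocks against the lwb instead of the
zilog. A minimal sketch of a hypothetical consumer, mirroring the ztest
changes shown earlier:

    static void
    my_get_done(zgd_t *zgd, int error)
    {
            if (error == 0 && zgd->zgd_bp != NULL)
                    zil_lwb_add_block(zgd->zgd_lwb, zgd->zgd_bp);

            /* ... release locks/holds and free the zgd ... */
    }

    static int
    my_get_data(void *arg, lr_write_t *lr, char *buf, struct lwb *lwb,
        zio_t *zio)
    {
            zgd_t *zgd;

            ASSERT3P(lwb, !=, NULL);
            ASSERT3P(zio, !=, NULL);

            zgd = kmem_zalloc(sizeof (*zgd), KM_SLEEP);
            zgd->zgd_lwb = lwb;     /* previously zgd->zgd_zilog */

            /* ... read or reference the record's data, set zgd->zgd_bp ... */

            return (0);
    }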

[The remaining hunks and the other 11 changed files in this commit are not shown.]
