WIP #2

Open · wants to merge 511 commits into base: master2
Conversation

skiselkov
Owner

ZFS scrub/resilver take excessively long due to issuing lots of random IO

@@ -57,6 +57,10 @@

#include "statcommon.h"

#ifndef MAX
#define MAX(x, y) ((x) > (y) ? (x) : (y))
#endif /* MAX */

can we get this from some header file?

Owner Author

No idea why I didn't find it on my first search; it turns out it sits in <sys/sysmacros.h> and isn't ifdef'ed to _KERNEL, so I'll pull it in from there.

set to \fBdisabled\fR, scrub and resilver process data in logical object
block order - this is analogous to opening a file and simply reading it
from start to finish in sequence. This approach is sensitive to how well
sequentially the data is layed out on the pool. If the data is

I think we can remove well.

typo: laid

or host system reboot. Instead, the algorithm takes "checkpoints" at
approximately 1 hour intervals. If the pool is exported or the host
system reboots, the operation will be resumed from the last of these
checkpoints.

This is great info. I think we should copy or move the description of scrubbing to the zpool manpage (perhaps in the section on the zpool scrub subcommand). Keep in mind that the zpool-features manpage is primarily for understanding when/why the feature should be enabled. Users that already have the feature enabled are unlikely to visit this manpage to understand scrubbing (and in several years, this will be most users).

* objects of at least sizeof (range_seg_t). The range tree will use
* the start of that object as a range_seg_t to keep its internal
* data structures and you can use the remainder of the object to
* store arbitrary additional fields as necessary.

dictating that it be at the start is a little restrictive. I think it would be more general if we could pass in the offset to the range_set_t instead.

Owner Author

This will be removed with the integration of ss_fill tracking into range_tree itself.

* allocations. This is useful for cases when close proximity of
* allocations is an important detail that needs to be represented
* in the range tree. See range_tree_set_gap(). The default behavior
* is not to bridge gaps (i.e. the maximum allowed gap size is 0).

I want to understand why this can't be done by the caller (or why it would be much worse). Hopefully it becomes clear as I read the rest of the code.

Owner Author

This will be removed with the integration of ss_fill tracking into range_tree itself.

* 1) it must NOT be an embedded BP
* 2) it must have no more than 1 DVA
* 3) it must be a level=0 (leaf) block, otherwise we need to
* read it right away to use it in metadata traversal

this restriction (level==0) shouldn't be necessary

Owner Author

Removed, new code can reorder any level.

avl_node_t qzio_addr_node;
list_node_t qzio_list_node;
} qzio_nodes;
} qzio_t;

Let's rename this since it isn't any sort of zio.

Not sure about this, but consider embedding the qblkptr_t's fields into this structure.

typedef struct {
range_seg_t ss_rs;
avl_node_t ss_size_node;
uint64_t ss_fill;

ss_fill should be implemented in the same layer as the range tree gap code (i.e. in the range_tree itself, unless we want to move the gap code into the caller as well).

Owner Author

I'll move the implementation to be internal to range_tree.

ASSERT(scn->scn_is_sorted);

if (scn->scn_phys.scn_state == DSS_FINISHING ||
scn->scn_checkpointing || shutdown) {

shutdown: will this take a long time?

Owner Author

It turns out this mechanism is a leftover from some older iterations, so I'll kill the whole "shutdown" thing.

* to parallelize processing of all top-level vdevs as much as possible.
*/
static void
dsl_scan_queues_run_one(queue_run_info_t *info)

we may need to make this issue a few (say 1000) zio_t's to each device before moving on to the next device. This will ensure that we keep all devices busy even if it takes a bunch of CPU to issue.

Owner Author

Ok, I'll rewrite the queue handling so that we create a taskq with "ncpus" worth of threads and then divide the vdevs we need to handle evenly between them.

* one DVA present (copies > 1), because there's no sensible way to sort
* these (how do you sort a queue based on multiple contradictory
* criteria?). So we exclude those as well. Again, these are very rarely
* used for leaf blocks, usually only on metadata.

Pardon an ignorant question, but why the restriction on multiple DVAs? Shouldn't it be possible to put each DVA separately into the queues and sort by linear address as usual? (That is, make dsl_scan_queue_insert call bp2qio once for each DVA?) As it stands, it seems that this will not sort anything from copies=2 datasets, e.g.?

Owner Author

The reason for this is that we would need to split up the blkptr_t and create several "fake" 1-DVA ones to pass to zio_read(), because otherwise zio_read() handles all DVAs at the same time. At this stage, it was deemed more hassle than it's worth, given that using copies=2 for large datasets is quite rare. I dunno, maybe it's a trivial change. I'll give this some more thought.


I think the change would be pretty small, and worthwhile because it would speed up the traversal (block discovery). You would need to create a qzio_t (or whatever we're calling them now) for each DVA.

Owner Author

I've renamed it to "scan_qio_t" - hope that's an acceptable name.
And I can confirm that it works, I just did a quick prototype. It even improves performance a little, exactly as you had predicted.


Seems like what we're enqueing is a DVA, so it might make sense to include that in the name, e.g. scan_dva_t?

Owner Author

We're queuing more than that, it's actually a whole set of parameters needed to construct a zio, hence why I wanted to include "io" somewhere in the name while keeping it reasonably short.


It isn't a big deal so I won't insist, but here's my thinking:

The big picture of this is that we are collecting the block pointers and then later issuing the scrub i/os (but actually we are collecting each DVA separately). This data structure is used to collect the BP's / DVA's. It's true that it tells us that we want to later issue a zio for the DVA that it holds. I guess it's a matter of opinion which aspect is more relevant to naming the structure.


whole set of parameters needed to construct a zio

Specifically, it's:

  • the abbreviated BP
  • the bookmark
  • the zio_flags, which are not really needed because they are constant for a given scan.

I'd argue that's essentially "the BP" (or "the DVA"), and the bookmark is not conceptually important (e.g. it could be omitted and the impact would only be reduced precision of error reporting).

Owner Author

I personally lean in favor of even dropping the "q" from the name and just calling it a scan_io_t. I'm currently working on renaming some of the internal functions, because many of the names are confusingly similar (e.g. "dsl_scan_queue_insert" for the block sorting queues and "scan_queue_insert" for the queue datasets to examine). I'd like to rename stuff having to do with the block sorting queues to "scan_io_queue_..." and the dataset queue to "scan_ds_queue_...".

@@ -1329,21 +1813,22 @@ dsl_scan_visit(dsl_scan_t *scn, dmu_tx_t *tx)

if (spa_version(dp->dp_spa) < SPA_VERSION_DSL_SCRUB) {
VERIFY0(dmu_objset_find_dp(dp, dp->dp_root_dir_obj,
enqueue_cb, tx, DS_FIND_CHILDREN));
enqueue_cb, NULL, DS_FIND_CHILDREN));
} else {
dsl_scan_visitds(scn,
dp->dp_origin_snap->ds_object, tx);
}
ASSERT(!scn->scn_pausing);

Forgive another ignorant question: while I recognize that this is not a change introduced here, it's not obvious to me why this assertion holds. From a few minutes' look at dsl_scan_visitds, I don't see anything that would ensure we don't set this flag in this particular case.


We can pause when visiting block pointers (from dsl_scan_visitbp()). The dp_origin_snap does not have any block pointers, so we can't pause while visiting it. Visiting it serves only to add its "next clones" to the work queue.

uint64_t sio_prop;
uint64_t sio_phys_birth;
uint64_t sio_birth;
zio_cksum_t sio_cksum;

When ZFS encryption is integrated, we'll need more fields here. (IIRC, at least blk_fill and dva[2])

Owner Author

When that happens, we can make the structure somewhat polymorphic to allow for additional fields in case the block is encrypted.


typedef struct scan_io {
dva_t sio_dva;
uint64_t sio_prop;

this is a copy of blk_prop, right? Maybe we should name it sio_blk_prop to make that extra clear (as opposed to some other properties).

Owner Author

Renamed.

dva_t sio_dva;
uint64_t sio_prop;
uint64_t sio_phys_birth;
uint64_t sio_birth;

Do you remember why we need both logical and physical births for the scrub io? It would be nice to have a comment somewhere explaining that, so that nobody tries to "optimize" it later.

Owner Author

No idea, really. All I did was store the crucial bits that originally went into constructing the zio_read(). I didn't really give much thought to the birth numbers. I'll try to hunt down the exact reason later.


@@ -1355,22 +1840,21 @@ dsl_scan_visit(dsl_scan_t *scn, dmu_tx_t *tx)
bzero(&scn->scn_phys.scn_bookmark, sizeof (zbookmark_phys_t));

/* keep pulling things out of the zap-object-as-queue */

update comment

Owner Author

Done.

@nwf

nwf commented Mar 15, 2017

I went to test this on a complicated pool in the middle of a resilver:

  pool: tank
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Wed Mar 15 23:21:41 2017
    541M scanned out of 4.19T at 3.03M/s, 402h26m to go
    88.2M resilvered, 0.01% done
config:

        NAME                                              STATE     READ WRITE CKSUM
        tank                                              ONLINE       0     0     0
          raidz2-0                                        ONLINE       0     0     0
            spare-0                                       ONLINE       0     0     0
              ata-ST2000DM001-1CH164_W1E2ZW9Q             ONLINE       0     0     0
              ata-Hitachi_HUA723020ALA640_MK0272YGGNK5AB  ONLINE       0     0     0  (resilvering)
            ata-TOSHIBA_DT01ACA200_Y2R3PH9AS              ONLINE       0     0     0
            ata-ST2000DM001-1CH164_Z1E8DXVW               ONLINE       0     0     0
            ata-WDC_WD20EFRX-68AX9N0_WD-WMC1T2373561      ONLINE       0     0     0
            ata-TOSHIBA_DT01ACA200_Y2Q4V7PAS              ONLINE       0     0     0
            ata-WDC_WD20EFRX-68AX9N0_WD-WMC1T2773124      ONLINE       0     0     0
        logs
          mirror-1                                        ONLINE       0     0     0
            wwn-0x500a07510964db6c-part1                  ONLINE       0     0     0
            ata-OCZ-SOLID3_OCZ-41N1418P1721ZPC8-part1     ONLINE       0     0     0
        cache
          ata-OCZ-SOLID3_OCZ-41N1418P1721ZPC8-part3       ONLINE       0     0     0
          ata-Crucial_CT120M500SSD1_14030964DB6C-part3    ONLINE       0     0     0
        spares
          ata-Hitachi_HUA723020ALA640_MK0272YGGNK5AB      INUSE     currently in use
          ata-WDC_WD20EZRZ-00Z5HB0_WD-WCC4M5JXUCS3        AVAIL   

by zpool exporting it and pointing my userland test program at it. I almost immediately got:

tree->avl_numnodes > 1
ASSERT at ../../module/avl/avl.c:1014:avl_destroy_nodes()Aborted

gdb says that this occurred thusly:

(gdb) bt
#0  0x00007fe439ae0fdf in raise () from /lib/x86_64-linux-gnu/libc.so.6
#1  0x00007fe439ae240a in abort () from /lib/x86_64-linux-gnu/libc.so.6
#2  0x00007fe43b25dac0 in libspl_assert (buf=buf@entry=0x7fe43b2632bb "tree->avl_numnodes > 1", func=func@entry=0x7fe43b263540 <__FUNCTION__.4475> "avl_destroy_nodes", line=line@entry=1014, file=0x7fe43b26311f "../../module/avl/avl.c") at ../../lib/libspl/include/assert.h:40
#3  0x00007fe43b25eb69 in avl_destroy_nodes (tree=tree@entry=0x7fe3dc0182f0, cookie=cookie@entry=0x7fe3ed34dbf8) at ../../module/avl/avl.c:1014
#4  0x00007fe43ac8dc95 in dsl_scan_io_queue_destroy (queue=queue@entry=0x7fe3dc0182a0) at ../../module/zfs/dsl_scan.c:2786
#5  0x00007fe43ac8e199 in scan_io_queues_destroy (scn=0xded0f0) at ../../module/zfs/dsl_scan.c:2856
#6  dsl_scan_done (scn=scn@entry=0xded0f0, complete=complete@entry=B_FALSE, tx=tx@entry=0x7fe3dc4679d0) at ../../module/zfs/dsl_scan.c:566
#7  0x00007fe43ac8e474 in dsl_scan_sync (dp=dp@entry=0xde04c0, tx=tx@entry=0x7fe3dc4679d0) at ../../module/zfs/dsl_scan.c:2004
#8  0x00007fe43acb14f4 in spa_sync (spa=spa@entry=0xd82460, txg=txg@entry=11906475) at ../../module/zfs/spa.c:6629
#9  0x00007fe43acc370e in txg_sync_thread (dp=0xde04c0) at ../../module/zfs/txg.c:545
#10 0x00007fe43ac11cdc in zk_thread_helper (arg=0x121adb0) at kernel.c:140
#11 0x00007fe439e53424 in start_thread (arg=0x7fe3ed34f700) at pthread_create.c:333
#12 0x00007fe439b969bf in clone () from /lib/x86_64-linux-gnu/libc.so.6

Suggestions for what to look at? Is it possible that I am violating some invariant by invoking spa_scan(spa, POOL_SCAN_SCRUB); when there's already a resilver in progress?

@skiselkov
Owner Author

@nwf Directly calling spa_scan shouldn't be giving you any trouble - worst case is you'll get rejected with EBUSY. Why this occurred is a bit of a mystery to me. That assertion failure in avl.c is due to an inconsistency in the tree - this would indicate that somebody else is manipulating that tree in parallel without locking it. This shouldn't happen, because dsl_scan_io_queue_destroy is only invoked from syncing context.
I'll try to reproduce this tomorrow and see if this is a porting issue or an implementation problem.

@nwf

nwf commented Mar 16, 2017

Because my line numbers are probably unique to me, "../../module/zfs/dsl_scan.c:2004" is the call to dsl_scan_done() inside the test for dsl_scan_restarting() in dsl_scan_sync(). Lemme run this with ZFS_DEBUG and see if there's anything more interesting before the assert trips.

@nwf

nwf commented Mar 16, 2017

Well, running again, it looks like the system quiesces the scan thread

dsl_scan_ddt: visiting ddb=1/0/8/18a91de167a1c
dsl_scan_ddt: visiting ddb=1/0/8/18a91de3efc9e
dsl_scan_ddt: visiting ddb=1/0/8/18a91de6d5343
dbuf_create: ds=mos obj=63 lvl=0 blkid=331615 db=0x7f3c0d701cb0
dsl_scan_ddt: visiting ddb=1/0/8/18a91e1947413
dsl_scan_check_pause: pausing at DDT bookmark 1/0/8/18a91e1947413
dsl_scan_ddt: scanned 974 ddt entries with class_max = 1; pausing=1
dsl_scan_sync: visited 974 blocks in 3502ms
dbuf_dirty: ds=mos obj=31 lvl=0 blkid=-1 size=40

does a transaction sync (elided) and then

Scrub: ts=1489630017 opentxg=11908347 sorted=1 state=1 to_ex=  4606261088256 exd=   601861189632 pend=   559140839424 iss=    42720350208 pr=     7055605760  DDT bookmark 1/0/8/18a91e1947413 zbookmark 0/0/0/0
dsl_resilver_restart: restarting resilver txg=11908347
txg_wait_synced: txg=11908350 quiesce_txg=11908348 sync_txg=11908350
txg_wait_synced: broadcasting sync more tx_synced=11908346 waiting=11908350 dp=0x1e39910
txg_sync_thread: txg=11908347 quiesce_txg=11908348 sync_txg=11908350
dbuf_dirty: ds=mos obj=1 lvl=0 blkid=1 size=4000
dnode_setdirty: ds=mos obj=1 txg=11908347
dbuf_dirty: ds=mos obj=mdn lvl=0 blkid=0 size=4000
dbuf_dirty: ds=mos obj=mdn lvl=1 blkid=0 size=4000
dnode_free_range: ds=mos obj=8537 blkid=0 nblks=36028797018963968 txg=11908347
dbuf_free_range: ds=mos obj=8537 start=0 end=0
dnode_setdirty: ds=mos obj=8537 txg=11908347
dbuf_dirty: ds=mos obj=mdn lvl=0 blkid=266 size=4000
dbuf_dirty: ds=mos obj=mdn lvl=1 blkid=2 size=4000
dnode_free: dn=0x7f3cf802d2f0 txg=11908347
tree->avl_numnodes > 1
ASSERT at ../../module/avl/avl.c:1014:avl_destroy_nodes()Aborted

The Scrub: line is output from my wrapper.

Hope that provides some insight.

@skiselkov
Owner Author

skiselkov commented Mar 16, 2017

I can't reproduce this. I tried to export & import in the middle of a spare replacement, but nothing broke.
This bug seems to be produced from a concurrent thread operating on the pool. Can you get a complete list of threads & stack traces from the affected system?
Btw: I noticed a big bug in the code today. Feel free to pull in the latest commit, it might help (shot in the dark though).

@ahrens

ahrens commented Mar 18, 2017

@skiselkov It looks like something is confused about the diff here. Maybe if you rebased on top of master, that would fix it? Or point me at the URL that will render just your changes.

@skiselkov
Owner Author

@ahrens Sure, can do. Give me a moment. Although this commit is not ready for upstreaming yet, I have some more changes I want to make (namely stabilize the range_tree rework and taskq restructuring that you have requested). The one I'm confused about is openzfs#172 - the TRIM PR. That's where I can't reproduce the test failure that zettabot spotted.

@skiselkov changed the base branch from master to master2 on March 18, 2017 17:17
@skiselkov
Owner Author

@ahrens Rebase on top of master complete.

@nwf

nwf commented Mar 18, 2017

@skiselkov I pulled in the range_tree_add fix and re-ran my wrapper and it ran the resilver to completion, so perhaps that was all that was wrong. Thanks!

@skiselkov
Owner Author

@nwf Thanks for the update, good to know.

skiselkov added a commit that referenced this pull request Apr 27, 2017
1) Removed the first-fit allocator.
2) Moved the autotrim metaslab scheduling logic into vdev_auto_trim.
2a) As a consequence of #2, metaslab_trimset_t was rendered superfluous. New
   trimsets are simple range_tree_t's.
3) Made ms_trimming_ts remove extents it is working on from ms_tree and then
   add them back in.
3a) As a consequence of #3, undid all the direct changes to the allocators and
   removed metaslab_check_trim_conflict and range_tree_find_gap.
skiselkov added a commit that referenced this pull request Apr 27, 2017
1) Removed the first-fit allocator.
2) Moved the autotrim metaslab scheduling logic into vdev_auto_trim.
2a) As a consequence of #2, metaslab_trimset_t was rendered superfluous. New
   trimsets are simple range_tree_t's.
3) Made ms_trimming_ts remove extents it is working on from ms_tree and then
   add them back in.
3a) As a consequence of #3, undid all the direct changes to the allocators and
   removed metaslab_check_trim_conflict and range_tree_find_gap.
avg-I and others added 5 commits June 13, 2017 13:06
rmustacc and others added 29 commits October 5, 2017 17:07
5593 Recent versions of svcadm invoked for multiple FMRIs say "Partial FMRI matches multiple instances"
8688 svcadm does not handle multiple partial FMRI arguments
8635 epoll should not emit POLLNVAL
8636 recursive epoll should emit EPOLLRDNORM