
Added support for dynamic binding of spi device in spidev. #3

Closed
wants to merge 3 commits into from

Conversation

@dgallot commented Jul 23, 2012

This patch does two things:

  • Removes the spi_board_info entries from bcm2708.c, which in this version of the kernel are bound to spidev by default.
  • Adds dynamic binding to spidev, allowing you to choose at runtime which SPI device is bound to the spidev driver.

This feature is really useful: while developing an SPI driver it lets you switch between spidev and the driver under development without rebuilding the kernel, and even without rebooting.
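
For context, the static registration being removed looks roughly like the sketch below (a minimal spi_board_info example; the field values and the init-function name are illustrative, not copied from bcm2708.c):

#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/spi/spi.h>

static struct spi_board_info bcm2708_spi_devices[] = {
        {
                .modalias     = "spidev",   /* bound to spidev at boot */
                .max_speed_hz = 500000,
                .bus_num      = 0,
                .chip_select  = 0,
                .mode         = SPI_MODE_0,
        },
        {
                .modalias     = "spidev",
                .max_speed_hz = 500000,
                .bus_num      = 0,
                .chip_select  = 1,
                .mode         = SPI_MODE_0,
        },
};

static void __init board_register_spi(void)
{
        /* Registered once at board init, hence the kernel rebuild needed
         * to attach anything other than spidev. */
        spi_register_board_info(bcm2708_spi_devices,
                                ARRAY_SIZE(bcm2708_spi_devices));
}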

@bootc (Owner) commented Jul 28, 2012

How does this allow you to switch between a different driver and spidev?

Regardless, I don't think I'm going to carry this patch. I really want to limit the patches I carry to those from the official tree rebased against the 3.2.x kernel plus patches that genuinely improve Raspberry Pi hardware drivers. This is much more a subsystem thing; if you can get this upstream or in the official kernel, I'll carry it - but until then, sorry...

@bootc closed this Jul 28, 2012
@dgallot (Author) commented Jul 28, 2012

How does this allow you to switch between a different driver and spidev?
Yes, that is the purpose of this patch.

This is much more a subsystem thing; if you can get this upstream or in
the official kernel, I'll carry it - but until then, sorry...
No problem, I understand that you want to limit the number of patches
you carry.
I really believe that the current SPI setup is not suitable for the
Raspberry Pi, since you may change which devices are attached to the SPI
port at any time. At the moment you have to rebuild a complete kernel
every time, because the SPI ports are bound to spidev by default; that is
why I tried to write a solution for this problem.

I will propose the patch upstream and see what they think about it.

Cheers,
Dominique


@bootc (Owner) commented Jul 28, 2012

On 28/07/2012 17:55, Dominique Gallot wrote:

I really believe that the current SPI setup is not suitable for the
Raspberry Pi, since you may change which devices are attached to the SPI
port at any time. At the moment you have to rebuild a complete kernel
every time, because the SPI ports are bound to spidev by default; that is
why I tried to write a solution for this problem.

SPI is not designed for hot-plugging, so it doesn't really make sense to
change drivers without a reboot at least. You will damage your devices
or the RPi if you connect/disconnect stuff from the SPI bus while it's
powered.

I agree it would be nice for this to be a bit more flexible, but the way
to do that would be with the sysfs binding mechanism. I very much doubt
upstream will agree with the way your patch does this - but good luck.
People on LKML will tell you how to make your code better, so it's all
good! :-)
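
For reference, the sysfs mechanism mentioned here works by writing a device name into a driver's unbind/bind attributes. A minimal user-space sketch follows; "spi0.0" and the target driver "mydrv" are placeholders, and bind only succeeds when the device's modalias matches that driver:

#include <stdio.h>

/* Write a bus device name into a driver's sysfs bind/unbind attribute. */
static int write_attr(const char *path, const char *value)
{
        FILE *f = fopen(path, "w");

        if (!f) {
                perror(path);
                return -1;
        }
        fprintf(f, "%s\n", value);
        return fclose(f);
}

int main(void)
{
        const char *dev = "spi0.0";   /* placeholder: bus 0, chip select 0 */

        /* Detach the device from spidev... */
        write_attr("/sys/bus/spi/drivers/spidev/unbind", dev);

        /* ...and hand it to another driver; "mydrv" is a placeholder name. */
        write_attr("/sys/bus/spi/drivers/mydrv/bind", dev);
        return 0;
}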

HTH,
Chris

Chris Boot
bootc@bootc.net

@dgallot (Author) commented Jul 28, 2012

By "at any time" I did not mean hot-plugging, but switching the system off
and changing the wiring, like we do with an Arduino.

I very much doubt upstream will agree with the way your patch does this
I agree it would be nice for this to be a bit more flexible, but the way
to do that would be with the sysfs binding mechanism

Like they have done for the i2c subsystem? Indeed, that would be cleaner;
I will check how they have done it.
I am a little bit afraid of changing things I do not understand;
debugging kernel modules is really complex for me.
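
For comparison, the i2c interface referred to here lets user space instantiate a device on a bus at runtime by writing a name and address into sysfs. A minimal sketch; the bus number, the "eeprom" name and the 0x50 address are only examples:

#include <stdio.h>

int main(void)
{
        /* Create an i2c client from user space by writing "name address"
         * to the adapter's new_device attribute. */
        FILE *f = fopen("/sys/bus/i2c/devices/i2c-1/new_device", "w");

        if (!f) {
                perror("new_device");
                return 1;
        }
        fprintf(f, "eeprom 0x50\n");
        fclose(f);

        /* The matching delete_device attribute removes it again; it takes
         * just the address ("0x50"). */
        return 0;
}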

People on LKML will tell you how to make your code better, so it's all
good! :-)
As you noticed, I am not a C developer; at least learning it is already a
win.

Cheers,
Dominique


bootc pushed a commit that referenced this pull request Aug 25, 2012
commit 3cf003c upstream.

Jian found that when he ran fsx on a 32 bit arch with a large wsize the
process and one of the bdi writeback kthreads would sometimes deadlock
with a stack trace like this:

crash> bt
PID: 2789   TASK: f02edaa0  CPU: 3   COMMAND: "fsx"
 #0 [eed63cbc] schedule at c083c5b3
 #1 [eed63d80] kmap_high at c0500ec8
 #2 [eed63db] cifs_async_writev at f7fabcd7 [cifs]
 #3 [eed63df0] cifs_writepages at f7fb7f5c [cifs]
 #4 [eed63e50] do_writepages at c04f3e32
 #5 [eed63e54] __filemap_fdatawrite_range at c04e152a
 #6 [eed63ea4] filemap_fdatawrite at c04e1b3e
 #7 [eed63eb4] cifs_file_aio_write at f7fa111a [cifs]
 #8 [eed63ecc] do_sync_write at c052d202
 #9 [eed63f74] vfs_write at c052d4ee
#10 [eed63f94] sys_write at c052df4c
#11 [eed63fb0] ia32_sysenter_target at c0409a98
    EAX: 00000004  EBX: 00000003  ECX: abd73b73  EDX: 012a65c6
    DS:  007b      ESI: 012a65c6  ES:  007b      EDI: 00000000
    SS:  007b      ESP: bf8db178  EBP: bf8db1f8  GS:  0033
    CS:  0073      EIP: 40000424  ERR: 00000004  EFLAGS: 00000246

Each task would kmap part of its address array before getting stuck, but
not enough to actually issue the write.

This patch fixes this by serializing the marshal_iov operations for
async reads and writes. The idea here is to ensure that cifs
aggressively tries to populate a request before attempting to fulfill
another one. As soon as all of the pages are kmapped for a request, then
we can unlock and allow another one to proceed.

There's no need to do this serialization on non-CONFIG_HIGHMEM arches
however, so optimize all of this out when CONFIG_HIGHMEM isn't set.
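
The shape of the fix is roughly the following (a sketch of the HIGHMEM-only serialization described above; the helper names are illustrative rather than quoted from the patch):

#include <linux/mutex.h>

/* Serialize page kmapping only where kmap() can actually block waiting
 * for a free slot (32-bit CONFIG_HIGHMEM); elsewhere the helpers compile
 * away to nothing. */
#ifdef CONFIG_HIGHMEM
static DEFINE_MUTEX(cifs_kmap_mutex);

#define cifs_kmap_lock()        mutex_lock(&cifs_kmap_mutex)
#define cifs_kmap_unlock()      mutex_unlock(&cifs_kmap_mutex)
#else
#define cifs_kmap_lock()        do { } while (0)
#define cifs_kmap_unlock()      do { } while (0)
#endif

/* Callers hold the lock across marshalling, so one request gets all of
 * its pages kmapped before the next one may start:
 *
 *      cifs_kmap_lock();
 *      for (i = 0; i < npages; i++)
 *              kmap(pages[i]);
 *      cifs_kmap_unlock();
 */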

Reported-by: Jian Li <jiali@redhat.com>
Signed-off-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Steve French <smfrench@gmail.com>
[bwh: Backported to 3.2: adjust context]
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
bootc pushed a commit that referenced this pull request Aug 25, 2012
…d reasons

commit 5cf02d0 upstream.

We've had some reports of a deadlock where rpciod ends up with a stack
trace like this:

    PID: 2507   TASK: ffff88103691ab40  CPU: 14  COMMAND: "rpciod/14"
     #0 [ffff8810343bf2f0] schedule at ffffffff814dabd9
     #1 [ffff8810343bf3b8] nfs_wait_bit_killable at ffffffffa038fc04 [nfs]
     #2 [ffff8810343bf3c8] __wait_on_bit at ffffffff814dbc2f
     #3 [ffff8810343bf418] out_of_line_wait_on_bit at ffffffff814dbcd8
     #4 [ffff8810343bf488] nfs_commit_inode at ffffffffa039e0c1 [nfs]
     #5 [ffff8810343bf4f8] nfs_release_page at ffffffffa038bef6 [nfs]
     #6 [ffff8810343bf528] try_to_release_page at ffffffff8110c670
     #7 [ffff8810343bf538] shrink_page_list.clone.0 at ffffffff81126271
     #8 [ffff8810343bf668] shrink_inactive_list at ffffffff81126638
     #9 [ffff8810343bf818] shrink_zone at ffffffff8112788f
    #10 [ffff8810343bf8c8] do_try_to_free_pages at ffffffff81127b1e
    #11 [ffff8810343bf958] try_to_free_pages at ffffffff8112812f
    #12 [ffff8810343bfa08] __alloc_pages_nodemask at ffffffff8111fdad
    #13 [ffff8810343bfb28] kmem_getpages at ffffffff81159942
    #14 [ffff8810343bfb58] fallback_alloc at ffffffff8115a55a
    #15 [ffff8810343bfbd8] ____cache_alloc_node at ffffffff8115a2d9
    #16 [ffff8810343bfc38] kmem_cache_alloc at ffffffff8115b09b
    #17 [ffff8810343bfc78] sk_prot_alloc at ffffffff81411808
    #18 [ffff8810343bfcb8] sk_alloc at ffffffff8141197c
    #19 [ffff8810343bfce8] inet_create at ffffffff81483ba6
    #20 [ffff8810343bfd38] __sock_create at ffffffff8140b4a7
    #21 [ffff8810343bfd98] xs_create_sock at ffffffffa01f649b [sunrpc]
    #22 [ffff8810343bfdd8] xs_tcp_setup_socket at ffffffffa01f6965 [sunrpc]
    #23 [ffff8810343bfe38] worker_thread at ffffffff810887d0
    #24 [ffff8810343bfee8] kthread at ffffffff8108dd96
    #25 [ffff8810343bff48] kernel_thread at ffffffff8100c1ca

rpciod is trying to allocate memory for a new socket to talk to the
server. The VM ends up calling ->releasepage to get more memory, and it
tries to do a blocking commit. That commit can't succeed however without
a connected socket, so we deadlock.

Fix this by setting PF_FSTRANS on the workqueue task prior to doing the
socket allocation, and having nfs_release_page check for that flag when
deciding whether to do a commit call. Also, set PF_FSTRANS
unconditionally in rpc_async_schedule since that function can also do
allocations sometimes.
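
In outline, the fix follows this pattern (a sketch only; the real call sites are xs_tcp_setup_socket/rpc_async_schedule and nfs_release_page, and the check shown here is simplified):

#include <linux/sched.h>

/* Sketch: the workqueue task marks itself as being inside a filesystem
 * transaction while it allocates the transport socket, so any page
 * reclaim triggered by that allocation sees PF_FSTRANS on current. */
static void setup_socket_sketch(void)
{
        current->flags |= PF_FSTRANS;
        /* ... allocate and connect the socket here ... */
        current->flags &= ~PF_FSTRANS;
}

/* On the reclaim side, nfs_release_page() then bails out early instead
 * of waiting for a blocking commit:
 *
 *      if (current->flags & PF_FSTRANS)
 *              return 0;
 */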

Signed-off-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>