Extremely slow access to epochs after channel remapping. #7947
It's really hard to debug/profile without a script to replicate the issue.
Got it, I will post it tomorrow. Thanks for the reply.
Unfortunately, I can't replicate the issue with the sample data. The reason is probably that my data has around 400 event IDs, whereas here there are only 4, and I'm flipping 200 out of those 400 IDs versus only 2 out of 4 here. After concatenating the epochs list with the 400 IDs (200 of which have flipped channels), the issue arises when I try to create evokeds out of them. The reason I'm flipping the channels is that it is a tactile paradigm and I want to pool right stimuli with left stimuli by mirroring them. The procedure here is the same as in my code.

MNE version: 0.20.7
@KSuljic you can make fake events using sample data to replicate the issue.
Thank you for the hint. I could replicate the issue to some extent, but in my code the difference is stronger. Time to evoke unflipped epochs: 0:00:00.056981. That may not seem like much, but in my analysis it turns 0.8 s into 3.5 h. Thank you for your aid.
# Prepare example data
import itertools
import math
import random

import numpy as np
import mne
from mne.datasets import sample

data_path = sample.data_path()
fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
raw = mne.io.read_raw_fif(fname, preload=True)
# OriginalChannels is my own channel-name list (defined elsewhere in my code)
info = mne.create_info(ch_names=OriginalChannels, sfreq=500, ch_types='eeg')
events = mne.find_events(raw, stim_channel='STI 014')

# Example factors
moving = ['moving_no', 'moving_yes']
attention = ['attention_no', 'attention_yes']
phase = ['active', 'passive', 'stationary']
stim = ['single', 'empty']
side = ['side_left', 'side_right']
closeness = ['stim_close', 'stim_far']
factors = [side, phase, stim, closeness, moving, attention]
conditions = list(itertools.product(*factors))
conditions_agg = ['/'.join(c) for c in conditions]
condition_no = np.arange(1, len(conditions_agg) + 1)  # one ID per condition
condition_ids_dict = dict(zip(conditions_agg, condition_no))

# Create fake events by jittering copies of the sample events
Events = np.empty((0, 3))
for i in np.arange(20):
    x = random.uniform(0.9, 1.1)
    events_sim = events.copy()
    events_sim[:, 0] = events[:, 0] * x
    Events = np.concatenate((Events, events_sim))
no_rep = math.ceil(len(Events) / len(condition_no))
event_ids_list = np.concatenate([condition_no] * no_rep)
Events[:, 2] = event_ids_list[:len(Events)]
Events = Events.astype('int')

# Create epochs
raw = raw.pick('eeg')
raw.info = info
epochs = mne.Epochs(raw, Events, event_id=condition_ids_dict,
                    event_repeated='drop', preload=True)
# Split into per-condition epochs and concatenate them again
Epochs_list = []
for k in list(epochs.event_id.keys()):
    epoch = epochs[k]
    Epochs_list.append(epoch)
Epochs_concat = mne.concatenate_epochs(Epochs_list)
# Evoking
import datetime

# untouched epochs
a = datetime.datetime.now()
evoked_l = epochs['side_left'].average()
evoked_r = epochs['side_right'].average()
b = datetime.datetime.now()
print(f'Time to evoke untouched epochs: {b-a}')

# concatenated epochs
a = datetime.datetime.now()
evoked_remap_l = Epochs_concat['side_left'].average()
evoked_remap_r = Epochs_concat['side_right'].average()
b = datetime.datetime.now()
print(f'Time to evoke concatenated epochs: {b-a}')
I confirm the weirdness:
@larsoner any hint?
The issue is also reproducible without mne.rename_channels(), simply by appending the epochs to a list and concatenating with mne.concatenate_epochs().
In that case, can you edit your post above to simplify the code? The more minimal your example is, the easier it is for us to track down the problem. |
Done. It seems the issue has something to do with appending the epochs and then using mne.concatenate_epochs(). Is there any way to mirror/flip the channels in place (meaning in the original epochs instance)? Then I wouldn't need concatenate_epochs(). I tried rename_channels() and reorder_channels() on the original epochs instance, followed by equalize_channels() in both cases. Neither worked; the channels stay the same:

for k in list(epochs.event_id.keys()):
    if 'side_right' in k:
        epochs[k].reorder_channels(RemappedChannels)

or

for k in list(epochs.event_id.keys()):
    if 'side_right' in k:
        epochs[k].rename_channels(remappingRtoL)

EDIT: I just realized that both functions don't operate in place.
Here is an example that can be copy-pasted:
The output is:
i.e., the problem is One fix would be to make it a tuple of tuple of str, which is immutable. I actually like this change because |
(and it will speed up all
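As a rough illustration of why immutability helps here (a sketch of the general Python behavior, not the actual MNE internals): `copy.deepcopy` of a tuple whose elements are all immutable can return the original object unchanged, whereas a list of lists must be rebuilt element by element on every copy:

```python
import copy

n = 100_000
# Mutable structure: every deepcopy must allocate n new inner lists.
drop_log_list = [['IGNORED'] for _ in range(n)]
# Fully immutable structure: deepcopy can reuse the original object.
drop_log_tuple = tuple(('IGNORED',) for _ in range(n))

list_copy = copy.deepcopy(drop_log_list)
tuple_copy = copy.deepcopy(drop_log_tuple)

print(list_copy is drop_log_list)    # False: a brand-new list of new lists
print(tuple_copy is drop_log_tuple)  # True: the same object is returned
```

This is why switching such an attribute to a tuple of tuples of str can make repeated copying (and hence repeated epoch selection) dramatically cheaper.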
+1 for using tuples then
Thank you very much for looking into it. |
@agramfort @larsoner Can I fix this myself somehow (with tuples?), or do I have to wait for an update? Unfortunately, due to this issue my analysis has come to a stop. Thank you!
You can try doing
Unfortunately, neither worked. If I try to subselect, it states:
I'll try to install the dev version in another conda env. It seems you fixed it there already. |
In the dev version it works perfectly (roughly only double the time as you mentioned). Thank you very much! |
I have two epochs lists. In one of the lists, all channels have to be mirrored to the other hemisphere (flipped). After that, I want to concatenate both lists into one epochs object. So I'm using mne.rename_channels() to remap the channels of the one list, then mne.equalize_channels() to get a uniform channel mapping across the two lists, and then mne.concatenate_epochs() to concatenate them. This works perfectly fine, but if I try to access a condition in the resulting epochs instance after the concatenation, things become really slow. Before, the access is virtually instant; after, it takes up to 20 s. How can I resolve this? I've tried a lot already. Thank you very much!
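For reference, the left/right pooling described above only needs a symmetric rename mapping. A minimal sketch of building one (the channel pairs below are hypothetical examples, not the poster's actual montage), which could then be passed to a call like `mne.rename_channels(epochs.info, mapping)`:

```python
# Hypothetical homologous channel pairs; a real montage's pairs would go here.
pairs = [('C3', 'C4'), ('F3', 'F4'), ('P3', 'P4')]

# Map every left channel to its right homologue and vice versa;
# midline channels (e.g. 'Cz') are omitted and therefore keep their names.
mapping = {}
for left, right in pairs:
    mapping[left] = right
    mapping[right] = left

print(mapping['C3'])  # 'C4'
print(mapping['F4'])  # 'F3'
```

Because the mapping is its own inverse, applying it once to the "flip" list mirrors every channel in a single rename call, after which the two lists share an identical channel set and can be equalized and concatenated.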