
Support single audio mix server mode to support large ensembles and simplify the mixing #599

Closed
corrados opened this issue Sep 19, 2020 · 53 comments · Fixed by #1381
Labels: feature request

Comments

@corrados
Contributor

See here:

A simple approach with only minor changes to the Jamulus server code is wanted. Specification:

Add a new command line argument to the server, e.g. --singlemix:

  • No multithreading (there is only one mix and one encode, so multithreading is not needed)
  • Only 128-sample frame size support
  • Only mono support (gives us the highest possible number of connected clients, which is what this modification is all about)
  • The first client to connect to the server is the "director"; all clients that connect afterwards get the director's mix. So you just have to make sure that the director connects to the server before your session begins (this requirement should be very easy to fulfil).

There is a vecvecbyCodedData buffer which is used for both encoding and decoding. I'll introduce a separate buffer so that I can re-use the output buffer of the first client for all other clients. So instead of calling MixEncodeTransmitData for all the other clients, they simply get vecChannels[iCurChanID].PrepAndSendPacket ( &Socket, vecvecbyCodedData[iDirectorID], iCeltNumCodedBytes );.

I just did a quick hack: if I modify CreateChannelList so that no client is added, the audio mixer panel is simply empty. This would be the case for the slave clients. They then do not see how many clients are currently connected, which is not a big issue.

If "--singlemix" is given, "-F" and "-T" are deactivated and a warning is shown that these cannot be combined. In the OnNetTranspPropsReceived function we can check that the client uses 128 samples and, if not, refuse the connection.
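As a rough standalone sketch of this buffer re-use (simplified illustration only, not actual Jamulus code; encodeOpusFrame and sendCodedPacket are placeholder names standing in for the OPUS encoder and for CChannel::PrepAndSendPacket):

#include <cstdint>
#include <cstdio>
#include <vector>

// placeholder for the single OPUS encode of the director's mix
// (one 128-sample mono frame in this mode)
static std::vector<uint8_t> encodeOpusFrame ( const std::vector<int16_t>& vecsMix )
{
    // dummy coded packet; the real code would call the OPUS encoder here
    return std::vector<uint8_t> ( vecsMix.size() / 2, 0 );
}

// placeholder for vecChannels[iChanID].PrepAndSendPacket ( &Socket, ..., iCeltNumCodedBytes )
static void sendCodedPacket ( const int iChanID, const std::vector<uint8_t>& vecbyCodedData )
{
    std::printf ( "sending %zu coded bytes to channel %d\n", vecbyCodedData.size(), iChanID );
}

int main()
{
    const int iNumClients = 5; // channel 0 is the "director" (first connected client)

    std::vector<int16_t> vecsDirectorMix ( 128, 0 ); // the director's 128-sample mono mix

    // encode exactly once per frame ...
    const std::vector<uint8_t> vecbyCodedData = encodeOpusFrame ( vecsDirectorMix );

    // ... and re-use the director's coded buffer for every connected client,
    // instead of calling MixEncodeTransmitData once per client
    for ( int iChanID = 0; iChanID < iNumClients; iChanID++ )
    {
        sendCodedPacket ( iChanID, vecbyCodedData );
    }

    return 0;
}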

There is a new branch where the implementation is done:
https://github.com/corrados/jamulus/tree/feature_singlemixserver

@corrados
Contributor Author

@storeilly, @kraney, @maallyn I updated the code on the feature_singlemixserver branch with the following functionality:

  • The master "director" client defines the mode, i.e. mono/stereo and the audio quality.
  • The slave clients have to use the same mono/stereo and audio quality settings as the director, otherwise their client will show "TRYING TO CONNECT" all the time until it automatically disconnects after a while.
  • Only the master "director" sees the full audio mixer board faders and can control the audio mix. All slave clients will see an empty audio mixer board.

It would be great to get feedback from you:

  • Is the source code stable?
  • Does it work according to the above specification?
  • How many clients can a fast server PC serve in this mode? (I would recommend using mono mode to get the highest number of clients.)

@storeilly

storeilly commented Sep 19, 2020

Will each client be able to balance their own input in the mix? If not, I can imagine one of two scenarios:
1. Some singers will turn their gain too high to hear themselves against the mix, and the director will turn their fader back down to compensate, ending with the singer clipping.
2. Some singers will turn their gain too low to reduce themselves, and the director will do the reverse.
I'm not convinced this use case will be popular or workable, sorry.

@ann0see
Member

ann0see commented Sep 19, 2020

I really like this feature since it has huge potential. If really only the "master client" can do the mix, I don't see any problem, @storeilly?

What I see as a problem is that the clients just see "TRYING TO CONNECT". I'd prefer (at least for newer clients) to show "Please set the audio quality/... as follows: [...]" or even a server message which sets them automatically. If you don't do that, it will probably raise quite a few support questions and problems.

@storeilly

storeilly commented Sep 19, 2020 via email

@pljones
Collaborator

pljones commented Sep 19, 2020

Is there any way what's been done could be tuned so that each participant (apart from the director) sees only their own channel control? This wouldn't save server-side processing - you'd still have n mixes to create. But it retains simplicity in the UI whilst preserving control.

The levels would be set by the director, apart from a participant's own level, which they could set for themselves. Perhaps it would still need a "balance" so you could raise your own level against the rest of the mix, or in this mode, your own slider becomes "balance" -- at the bottom, you only hear everyone else, at the top, you only hear yourself, in the middle you hear the director's mix.

(I'd also think a "list participants" option would be needed. Otherwise I'd be wondering "who's here?" all the time.)


Of course, my other question is -- how does this affect the jam recorder? Is the emit AudioFrame still done before the mix? This seems like it would answer one of my questions on the thread for recording a pre-mixed file - the director's mix is what you'd record. (Though I'd still advise a new signal/slot, it needs the filename handled properly and the file not included in the projects.)

@kraney

kraney commented Sep 19, 2020

To me the scenario that leads to clipping sounds pretty passive-aggressive and like it could best be resolved by the director mentioning out loud that he’s having to dial down (or up) someone’s volume. It doesn’t seem like an indictment of the entire feature.

I haven’t had a chance to try it yet for time zone reasons but I should be able to run it during this weekend.

I have a little concern with the “first to connect” solution for selecting a director only because, supposing it goes wrong and someone accidentally joins before the director, how can it be resolved? Everyone has to leave and then wait for confirmation by the director via some other channel before rejoining? However I understand the desire to minimize the disruption to the code.

One other idea for selection comes to mind - maybe the director is the first one to join who chooses a baton as his instrument?

@corrados
Contributor Author

If they can't adjust their own perception of their volume in the mix without affecting how others hear it, they will compensate and use the control that they have which does. They will sing too loud or too quiet (depending on personality type). Any attempt by an outside force (the director) to balance the mix will force them to increase the compensation to detrimental levels.

That may or may not be the case. I think it strongly depends on how well the singers can adapt to the mix. For some it may be simple to adapt, for others it may not be possible to sing. But this can only be evaluated in a real test with a lot of singers, I guess.

I'd prefer (at least for newer clients) to show "Please set the audio quality/... as follows: [...]" or even a server message which sets them automatically.

Sure, a lot of things could be done. The current code implements the basic functionality. If it turns out that this mode is useless, all additional effort would be wasted. So the first step is to verify that this mode works with a lot of singers (who are all instructed, before entering the server, what to set in the Jamulus client). If that is successful, more time can be spent improving this server mode.

Is there any way what's done could be tuned to having each participant (apart from the director) see only their own channel control? This wouldn't save server side processing - you'd still have n mixes to create.

Yes, exactly. If you want any individual mix, you have to use the normal mode.

I'd also think a "list participants" option would be needed. Otherwise I'd be wondering "who's here?" all the time.

The server could send a Chat message when a new client enters the server. But as I wrote above, before any more time is spent, it has to be verified that this new mode works in a real scenario and turns out to be useful.

Of course, my other question is -- how does this affect the jam recorder?

The jam recorder is not affected. It works the same as in the normal server mode.

I have a little concern with the “first to connect” solution for selecting a director only because, supposing it goes wrong and someone accidentally joins before the director, how can it be resolved?

No problem. You can simply swap places at the server: the person who currently has control leaves the server, then the director disconnects and immediately re-connects. Now the director has control.

@kraney

kraney commented Sep 19, 2020

The swap sounds simple for a small group, but really problematic with a group of 60 people who are all joining a rehearsal that is starting, each according to his own timing.

But I do want to say I really appreciate the very fast turnaround on this, and the approach to try with the simplest change first then refine once the principle is demonstrated.

@kraney

kraney commented Sep 19, 2020

I ran a test using the current master first, to serve as a baseline. For my test I connect once using a normal client I can listen to, then add a bunch of headless clients that are just silent and produce no audio out (connected to a JACK dummy audio device). Because of this I can't be 100% sure that they behave exactly like a normal client. However, I did check that if one of those is "master", I can connect successfully with a normal client and get an empty mixer, but hear audio. I'm using 2 vCPUs on a cloud server, Intel Skylake architecture I believe.

In this test I was able to connect 62 clients, but the last few didn't go smoothly. The user name took a long time to appear after the channel appeared. The sound quality degraded somewhat. CPU usage was around 75-80% out of 100 on a two-core system; that is, about 160% usage out of 200 in the Linux way of reporting it.

I then ran a test using singlemix. With this, I was able to connect 80 clients with no immediate issues. Sound quality was good. CPU usage was around 46% out of 100, or about 92% out of 200 in the Linux way of reporting it.

However, it only stayed stable for about a minute. After time passed, without any new clients, it shifted into a very degraded mode. Sound came in short, regular choppy bursts, about half on and half off, like 120 bpm pulses. CPU usage dropped to about 13%. Stopping the synthetic clients does not immediately resolve it, but it does go back to normal once the server times the clients out of its list.

This was in normal quality, 128-sample buffer, mono.

@kraney

kraney commented Sep 20, 2020

There's one longer term use case I'd like to support that might suggest some requirements for this mode. I'm looking into the ability to re-transmit the Jamulus audio into a YouTube Live or Facebook Live stream. It seems pretty straightforward to do this using SoundFlower on a mac so you can capture what is coming from Jamulus.

In such a situation, the director would likely want to send a click track to the band members to control the tempo and dynamics. So it would be useful to have a second mix, not tied to the director's, in which the click track can be muted for the audio that will be retransmitted.

@maallyn

maallyn commented Sep 22, 2020

Folks:

I just did a compile of the feature_singlemixserver branch, pulled as of about 10 PM Pacific U.S. time on Monday, Sept 21, 2020. It is on my server newark-music.allyn.com, which is a four-CPU dedicated server at Linode in Newark, N.J. I did confirm that only the first client to connect has the sliders and the others don't.

However, I did notice the following warnings on the compile output:

=====================================================================

usr/include/x86_64-linux-gnu/qt5/QtWidgets -isystem /usr/include/x86_64-linux-gnu/qt5/QtGui -isystem /usr/include/x86_64-linux-gnu/qt5/QtNetwork -isystem /usr/include/x86_64-linux-gnu/qt5/QtXml -isystem /usr/include/x86_64-linux-gnu/qt5/QtConcurrent -isystem /usr/include/x86_64-linux-gnu/qt5/QtCore -I. -I. -I/usr/lib/x86_64-linux-gnu/qt5/mkspecs/linux-g++-64 -o clientdlg.o src/clientdlg.cpp
src/clientdlg.cpp:738:66: warning: unused parameter ‘strVersion’ [-Wunused-parameter]
QString strVersion )
^
src/clientdlg.cpp:751:68: warning: unused parameter ‘strVersion’ [-Wunused-parameter]
QString strVersion )
^
g++ -c -m64 -pipe -O2 -std=c++0x -D_REENTRANT -Wall -W -fPIC -DAPP_VERSION="3.5.10git" -DOPUS_BUILD -DUSE_ALLOCA -DCUSTOM_MODES -D_REENTRANT -DHAVE_LRINTF -DHAVE_STDINT_H -DQT_NO_DEBUG -DQT_WIDGETS_LIB -DQT_GUI_LIB -DQT_NETWORK_LIB -DQT_XML_LIB -DQT_CONCURRENT_LIB -DQT_CORE_LIB -I. -Isrc -Ilibs/opus/include -Ilibs/opus/celt -Ilibs/opus/silk -Ilibs/opus/silk/float -Ilibs/opus/silk/fixed -isystem /usr/include/x86_64-linux-gnu/qt5 -isystem /usr/include/x86_64-linux-gnu/qt5/QtWidgets -isystem /usr/include/x86_64-linux-gnu/qt5/QtGui -isystem /usr/include/x86_64-linux-gnu/qt5/QtNetwork -isystem /usr/include/x86_64-linux-gnu/qt5/QtXml -isystem /usr/include/x86_64-linux-gnu/qt5/QtConcurrent -isystem /usr/include/x86_64-linux-gnu/qt5/QtCore -I. -I. -I/usr/lib/x86_64-linux-gnu/qt5/mkspecs/linux-g++-64 -o serverdlg.o src/serverdlg.cpp
src/serverdlg.cpp:647:68: warning: unused parameter ‘strVersion’ [-Wunused-parameter]
QString strVersion )
^
g++ -c -m64 -pipe -O2 -std=c++0x -D_REENTRANT -Wall -W -fPIC -DAPP_VERSION="3.5.10git" -DOPUS_BUILD -DUSE_ALLOCA -DCUSTOM_MODES -D_REENTRANT -DHAVE_LRINTF -DHAVE_STDINT_H -DQT_NO_DEBUG -DQT_WIDGETS_LIB -DQT_GUI_LIB -DQT_

==============================================================

The chat message does show the git log as well as a warning that this server will have faders only on the first connection.

This server is on the central server listing for All Genres.

Mark

@kraney

kraney commented Sep 30, 2020

I spent some time poking around with the audio mix code and I've found some changes that improve the maximum scale even without resorting to the singlemix option.

Specifically

  • There is a 1 ms or 2 ms budget (depending on buffer size) for the mix code to complete before the next timer fires. This limits the number of mix operations you can do in a thread. Since the number of mix operations per channel grows with the number of channels, the number of channels per block must shrink as the number of channels grows.
  • The Qt global thread pool defaults to "optimal", which is one thread per CPU core. So on a 4-core system you get 4 threads, no matter how many the mix code tries to launch. This puts a ceiling on scale unless it's changed. About 2 threads per core seems to be the limit before you exceed the time budget anyway due to contention.

With tweaks to those items, I have been able to scale to 65 channels on an 8-core system with no notable audio issues; none of the quirkiness I was noticing in prior tests. That's with mono-in / stereo-out and 128-sample buffers (all clients), and it uses 345% CPU.

Note that I only seem to be able to use about 40% of each CPU core on this system before I exceed the time budget for mixing. It might actually be possible to improve the results by doing the mix for half of the channels at a 0.5ms time offset from the other half, at least for this CPU architecture. That is, don't roll all of the threads into the same FutureSynchronizer.

I also tried adding an indexed lookup for channels by hostname in hopes of making PutAudioData more efficient, but I don't believe it made a significant difference. PutAudioData is not the bottleneck.

diff --git a/src/server.cpp b/src/server.cpp
index bbdcd949..4e4ab03e 100755
--- a/src/server.cpp
+++ b/src/server.cpp
@@ -427,6 +427,8 @@ CServer::CServer ( const int          iNewMaxNumChan,
         vecChannels[i].SetEnable ( true );
     }
 
+    QThreadPool::globalInstance()->setMaxThreadCount(QThread::idealThreadCount()*4);
+
 
     // Connections -------------------------------------------------------------
     // connect timer timeout signal
@@ -694,6 +696,7 @@ void CServer::OnCLDisconnection ( CHostAddress InetAddr )
     if ( iCurChanID != INVALID_CHANNEL_ID )
     {
         vecChannels[iCurChanID].Disconnect();
+        hashChannelIndex.remove(InetAddr);
     }
 }
 
@@ -1042,8 +1045,11 @@ static CTimingMeas JitterMeas ( 1000, "test2.dat" ); JitterMeas.Measure(); // TE
         // processing with multithreading
         if ( bUseMultithreading )
         {
-// TODO optimization of the MTBlockSize value
-            const int iMTBlockSize = 20; // every 20 users a new thread is created
+            QFutureSynchronizer<void> FutureSynchronizer;
+            // Each thread must complete within the 1 or 2ms time budget for the timer.
+            const int iMaximumMixOpsInTimeBudget = 500;  // Approximate limit as observed on GCP e2-standard instance
+                                                         // TODO - determine at startup by running a small benchmark
+            const int iMTBlockSize = iMaximumMixOpsInTimeBudget / iNumClients; // number of ops = block size * total number of clients
             const int iNumBlocks   = static_cast<int> ( std::ceil ( static_cast<double> ( iNumClients ) / iMTBlockSize ) );
 
             for ( int iBlockCnt = 0; iBlockCnt < iNumBlocks; iBlockCnt++ )
@@ -1065,7 +1071,6 @@ static CTimingMeas JitterMeas ( 1000, "test2.dat" ); JitterMeas.Measure(); // TE
 
             // make sure all concurrent run threads have finished when we leave this function
             FutureSynchronizer.waitForFinished();
-            FutureSynchronizer.clearFutures();
         }
     }
     else
@@ -1480,21 +1485,8 @@ int CServer::GetNumberOfConnectedClients()
 
 int CServer::FindChannel ( const CHostAddress& CheckAddr )
 {
-    CHostAddress InetAddr;
-
-    // check for all possible channels if IP is already in use
-    for ( int i = 0; i < iMaxNumChannels; i++ )
-    {
-        // the "GetAddress" gives a valid address and returns true if the
-        // channel is connected
-        if ( vecChannels[i].GetAddress ( InetAddr ) )
-        {
-            // IP found, return channel number
-            if ( InetAddr == CheckAddr )
-            {
-                return i;
-            }
-        }
+    if (hashChannelIndex.contains(CheckAddr)) {
+        return hashChannelIndex[CheckAddr];
     }
 
     // IP not found, return invalid ID
@@ -1592,6 +1584,8 @@ bool CServer::PutAudioData ( const CVector<uint8_t>& vecbyRecBuf,
         {
             // in case we have a new connection return this information
             bNewConnection = true;
+            // also remember in the index
+            hashChannelIndex[HostAdr] = iCurChanID;
         }
     }
 
diff --git a/src/server.h b/src/server.h
index 15644c8f..acf96ab0 100755
--- a/src/server.h
+++ b/src/server.h
@@ -324,7 +324,6 @@ protected:
 
     // variables needed for multithreading support
     bool                      bUseMultithreading;
-    QFutureSynchronizer<void> FutureSynchronizer;
 
     bool CreateLevelsForAllConChannels  ( const int                        iNumClients,
                                           const CVector<int>&              vecNumAudioChannels,
@@ -334,6 +333,7 @@ protected:
     // do not use the vector class since CChannel does not have appropriate
     // copy constructor/operator
     CChannel                   vecChannels[MAX_NUM_CHANNELS];
+    QHash<CHostAddress,int>    hashChannelIndex;
     int                        iMaxNumChannels;
     CProtocol                  ConnLessProtocol;
     QMutex                     Mutex;
diff --git a/src/util.h b/src/util.h
index a8d7b3b3..88ee1117 100755
--- a/src/util.h
+++ b/src/util.h
@@ -834,6 +834,10 @@ public:
     quint16      iPort;
 };
 
+inline uint qHash(const CHostAddress& adr, uint seed) {
+    return qHash(adr.InetAddr, seed) + qHash(adr.iPort, seed);
+}
+
 
 // Instrument picture data base ------------------------------------------------
 // this is a pure static class

@kraney

kraney commented Sep 30, 2020

Better yet, don't use a FutureSynchronizer at all and have a separate timer loop per channel block. Then you might not need to expand the thread pool. Slightly tricky to adjust the block size in that case though.

@WolfganP

Sounds interesting @kraney. I also consider the OnTimer() triggering critical to the process (#455 (comment)), as it occurs asynchronously to the audio processing, and I think timer exhaustion (i.e. the decompress+mix+compress loop not finishing before the next timer triggers) should be logged somehow, to get a sense of whether the system is introducing audio artifacts due to overload. Did you implement any probe to detect that case?

Anyway, looking at the Qt docs (https://doc.qt.io/qtforpython/PySide2/QtCore/QTimer.html#accuracy-and-timer-resolution) I'm unsure how Jamulus reacts if OnTimer retriggers while it is still busy in the processing loop due to overload:

All timer types may time out later than expected if the system is busy or unable to provide the requested accuracy. In such a case of timeout overrun, Qt will emit timeout() only once, even if multiple timeouts have expired, and then will resume the original interval.

@corrados
Contributor Author

With tweaks to those items, I have been able to scale to 65 channels on an 8-core system with no notable audio issues; none of the quirkiness I was noticing in prior tests. That's with Mono-in / Stereo-out,128 buffers.

That is interesting. With your settings (stereo, 128 samples) we already had a report of supporting 100 clients, see #455 (comment). The question is what was different when brynalf did his test. Certainly, his CPU is faster but 65 vs 100 clients is a big difference.

Better yet, don’t use a FutureSynchronizer at all and have a separate timer loop per channel block.

I would say that this is not a practical approach. pljones drew a nice picture of the threading situation: #455 (comment). We have one part which decodes all OPUS packets, and then multiple threads which work on that data. If you have different timers, how do you deal with the common part at the beginning, since each client needs the decoded audio data from all the other clients?

@kraney

kraney commented Sep 30, 2020

Looking at #455, brynalf is on a 32-core CPU which would hide the main things I ran into with a smaller number of cores. idealThreadCount would be 32 already, so it would probably not be a limiting factor. I could run with more cores but it's not really practical for a cloud-hosted instance; it would get rather expensive. And really, the capabilities of the 8-core instance I was using are greatly underutilized.

For the common decoded audio data - I would say it seems like it would make sense to preserve the decoded data in a circular buffer. Each independent OnTimer would "top it up" with the latest new data and expire an equal amount of the oldest data, then work on the result.
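A minimal standalone sketch of that idea (assumptions: fixed-size PCM frames and a single producer per channel; this is not Jamulus code, and a real implementation would additionally need a mutex or a lock-free queue, since the receive path and the mix timers run on different threads):

#include <cstddef>
#include <cstdint>
#include <deque>
#include <utility>
#include <vector>

// per-channel ring of already-decoded frames: packets are decoded once on the
// receive path, and the mix timers only ever read decoded PCM
class CDecodedFrameRing
{
public:
    explicit CDecodedFrameRing ( const std::size_t iNewMaxFrames ) : iMaxFrames ( iNewMaxFrames ) {}

    // receive path: "top up" with the newest decoded frame, expire the oldest
    void Push ( std::vector<int16_t> vecsDecodedFrame )
    {
        if ( Frames.size() >= iMaxFrames )
        {
            Frames.pop_front();
        }
        Frames.push_back ( std::move ( vecsDecodedFrame ) );
    }

    // mix timer: read the newest frame without touching the decoder again
    const std::vector<int16_t>* Newest() const
    {
        return Frames.empty() ? nullptr : &Frames.back();
    }

private:
    std::deque<std::vector<int16_t>> Frames;
    const std::size_t                iMaxFrames;
};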

It looks like #455 would be the more appropriate place to continue the discussion - sorry to hijack this ticket.

@kraney

kraney commented Sep 30, 2020

Or another thought, why preserve the incoming encoded packets at all? Why not decode them on the way in, and store only the decoded data?

@kraney

kraney commented Sep 30, 2020

It would help if at the very least the next decode task could start before the mix tasks from the prior timer have completed. That would let CPU utilization get closer to 100% per core.

@chrisrimple

Question on the implementation for this: After the Leader has joined the server, do all other users see no faders, or faders that are inactive, or what? I think they shouldn't see faders that appear to be active, but should still see Profile Names of connected users.

@chrisrimple

To @storeilly's concerns above, I still think there's huge value for choral groups in this feature. Imagine an in-person rehearsal:

  • Everyone's standing close
  • Everyone's likely unamplified
  • Everyone's listening to everyone else
  • Everyone's balancing their output (singing volume) to fit in the group sound
  • The Leader is telling anyone out of balance to adjust

There are no individual mixer controls - everyone hears the group sound "unmixed", then adjusts their individual output to fit. The Jamulus equivalent would be to send the unmixed sound to all users (or to ignore their fader settings), then have individuals adjust their gain and singing volume to compensate, which is what this addresses.

That doesn't solve for the user who says "I can't hear myself well enough" or "The group isn't loud enough", but that's not a problem that Jamulus can solve. Jamulus' faders are not a control for System Volume - they're a control for "Jamulus within the bounds of System Volume". So if System Volume = 50%, then setting a Jamulus fader to 50% is equivalent to setting System Volume = 25%, and setting a Jamulus fader to 100% is equivalent to setting System Volume to 50%. If the user isn't getting enough volume from a "flat" mix, they need to increase their System Volume (which could be PC sound card or audio interface output), add an in-line mini amp to headphones, etc.
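A trivial numeric illustration of that arithmetic (the percentages are just the ones from the example above):

#include <cstdio>

int main()
{
    const double dSystemVolume = 0.50; // system volume at 50 %
    const double dJamulusFader = 0.50; // Jamulus fader at 50 %

    // the fader only scales the signal within the bounds set by the system volume
    std::printf ( "effective output level: %.0f %%\n", dSystemVolume * dJamulusFader * 100.0 ); // prints 25 %
    return 0;
}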

@pljones
Collaborator

pljones commented Oct 16, 2020

There's no individual mixer controls - everyone hears the group sound "unmixed", then adjusts their individual output to fit.

Isn't that partially self-contradictory? Each individual has their own mix: it's what reaches their ears, just like in Jamulus. They can control their own input level - no one else has control over that, just like in Jamulus.

ignore their fader settings

That's right. And it's generally how I play. I expect people in the group to use their ears and adjust their input levels so that the group is hearing a good mix.

The difference in etiquette between NINJAM and Jamulus is strange. In NINJAM, it's very much seen as "your problem" if you're too loud or quiet. You're affecting everyone else in the group, so it's down to you to fix it. In Jamulus, there's much more acceptance that everyone has control over what they hear and is expected to handle whatever someone else does -- even though it affects everyone in the group.

@mhilpolt

mhilpolt commented Jan 6, 2021

Thank you so much for your recommendation!

Either all of your clients are < 3.6.0 or all your clients must be >= 3.6.0.

As this branch is based on 3.5.10, I have to use clients < 3.6.0.
Or do you see a chance to rebase this branch to the master?

Why are you using an old version on the Raspberry Pi?

I use the JackTrip Foundation Raspberry Pi image, and it offers a really easy web interface to handle the login. I can register my (private) server and invite all users. Only the mic gain and output level can be adjusted, so there is not much possibility for mistakes...
Or do you know another easy to use Jamulus web gui for the RasPi?

@nefarius2001
Contributor

nefarius2001 commented Jan 8, 2021

Hello and thanks for Jamulus in general!
We are "ramping up" Jamulus in our 23-person choir, too. We had a test run with 5 clients today and already experienced vast differences in levels (and delays).

I like the "delegation" idea a lot; should that become a separate ticket? In my dreams, this would include downstreaming the slider positions, so I can derive my personal mix from the conductor's mix. (Edited:) I just found that this is just another step from there:
#756

Either all of your clients are < 3.6.0 or all your clients must be >= 3.6.0.

All clients, including or excluding the server instance?

@corrados
Contributor Author

corrados commented Jan 8, 2021

Or do you see a chance to rebase this branch to the master?

I have just merged the latest code into feature_singlemixserver. Can you please test whether it now works fine for you?

@mhilpolt

mhilpolt commented Jan 8, 2021

Wow, that's great. I will test on Sunday.

@mhilpolt

mhilpolt commented Jan 10, 2021

I've just tested the singlemixserver branch after the merge with master, and it works the following way:

  1. If "director" client is <= 3.6.0, all other clients need to have versions <= 3.6.0
  2. if "director" client is > 3.6.0, all other clients need to have version > 3.6.0

This behavior is independent of the version of the server instance; it works with server versions 3.5.10 and 3.6.2.

Is there a way to overcome this restriction in the singlemixserver branch, since it does not apply to the master branch?

@cwerling

cwerling commented Mar 25, 2021

Hi fellow musicians!

I created a new rudimentary solution to the --singlemix mode cherry-picking a few of Volker's original commits as well as some of my own changes.

  • The first joiner to the server will be the master and will control everybody's gain and pan
  • Everybody's gain and pan will be the master's except for their own channel which is muted
  • Everybody will see the usual controls still (in order to see the connected clients)
  • All the buttons (except for the "GRP" assignments) are useless for the non-masters
  • This fork is based on the latest master (as of writing)

I implemented this for our mixed choir of around 25 people. We start our weekly rehearsals with a soundcheck where previously everybody had to create a good mix (which is hard and time-consuming).

Now our conductor can do it for all of us, which should improve the sound quality a lot. It's paramount to us that we don't hear Jamulus monitoring: most of us use either no monitoring or local monitoring (from our interfaces).

Would be glad to have some people test this! You don't need custom clients, just a custom server build.

Next steps to make this feature future-proof:

  • As a client: Find out that the server uses --singlemix and maybe even identify the 'master' client (implicitly through index 0 or explicitly through server broadcast)
  • Show any hint in the UI that this server has a central mix (e.g. by disabling all controls except GRP)
  • Test test test

Happy to hear any feedback!

@ann0see
Member

ann0see commented Mar 25, 2021

Great! Thank you very much!

Everybody's gain and pan will be the master's except for their own channel which is muted

It would be great if this could be controlled on the client side (to be able to follow rule one).

Everybody will see the usual controls still (in order to see the connected clients)

For the long term, I think this should not be the case (but for a PoC that's OK): hide all the level meters/group/mute/pan/... and give a prominent message: "Single mix enabled".

@iainhallam

Show any hint in the UI that this server has a central mix (e.g. by disabling all controls except GRP)

It would be great if the clients could have disabled controls that show the mix that the master has decided.

@WolfganP

I created a new rudimentary solution to the --singlemix mode cherry-picking a few of Volker's original commits as well as some of my own changes. [...]

Excellent feature, thanks for the implementation. What happens if the conductor (first client) disconnects, for whatever reason? Will the conductor role be inherited by the next in line?

@cwerling

Hi all! I got a little update (now on a different branch on my fork):

  • The server now sends its --singlemix state to the client through a new channel (inspired by the recording state channel)
  • Clients that see SM_ENABLED and are not channel ID 0 (so "non-masters") will disable the fader, pan, and mute controls and read "Single mix at myserver" instead of "Personal mix at myserver"
  • The client that sees SM_ENABLED and is channel ID 0 (so the "master") won't disable any controls and will read "Single mix (you control!) at myserver" (a simplified sketch of this client-side decision follows below)
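For illustration only, a simplified standalone sketch of that client-side decision (SM_ENABLED and the channel-ID-0 convention are taken from the description above; the function name and strings here are placeholders, not actual Jamulus identifiers):

#include <cstdio>

// decide what the mixer board should do once the client knows the server's
// single mix state and its own channel ID
static void ApplySingleMixUiState ( const bool bServerSingleMix, const int iOwnChannelID )
{
    if ( !bServerSingleMix )
    {
        std::printf ( "Personal mix: all faders enabled\n" );
    }
    else if ( iOwnChannelID == 0 )
    {
        std::printf ( "Single mix (you control!): faders stay enabled for the master\n" );
    }
    else
    {
        std::printf ( "Single mix: fader, pan and mute controls disabled\n" );
    }
}

int main()
{
    ApplySingleMixUiState ( true, 0 ); // the "master" (channel ID 0)
    ApplySingleMixUiState ( true, 3 ); // a non-master client
    return 0;
}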

It would be great if [muting oneself] could be controlled on the client side (to be able to follow rule one).

This could be done by not disabling one's own mute / fader control?

Excellent feature, thanks for the implementation. What happens if the conductor (first client) disconnects, for whatever reason? Will the conductor role be inherited by the next in line?

That's a good question and needs to be tested. In my understanding, the first one to connect to a server gets channel ID 0 and keeps it for longer than they are connected. After disconnecting and reconnecting I still got ID 0, and it was not passed on in the meantime.

If that's correct, I suggest keeping it that primitive for now, since all other solutions for authorizing/delegating the 'master' role seem much more complex and error-prone to me. Unless somebody has a really good 'simple' idea? :)

It would be great if the clients could have disabled controls that show the mix that the master has decided.

Very good point. In the current implementation, people won't see the actual mix represented in their (disabled) controls. We have two overall options to implement the server side of this:

Option A: The master sets only his own mix (vecvecfGains[0]) on the server and the clients get the master's mix values instead of their own (vecvecfGains[i]). (as implemented right now, see code snippet below)

Option B: The master sets everybody's mixes on the server (so the server would actually update vecvecfGains[i] for all clients i). Open question here: does the client UI update itself when the server values change? (Not sure if there's a use case for a server-side change right now.) A hypothetical sketch of this option follows after the Option A snippet below.

for ( j = 0; j < iNumClients; j++ )
{
    // get a reference to the audio data and gain of the current client
    const CVector<int16_t>& vecsData = vecvecsData[j];
    float                   fGain    = vecvecfGains[iChanCnt][j];

    if ( bSingleMixServerMode )
    {
        // overwrite gain with the gain of the master client (same mix for everybody, except ...)
        fGain = vecvecfGains[0][j];

        // mute people's own channel (we don't want them to hear themselves in the common mix!)
        if ( iChanCnt == j )
        {
            fGain = 0.0f;
        }
    }
}
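For illustration, a hypothetical sketch of what Option B could look like on the server side (this is not code from the fork; plain std::vector stands in for the CVector-based gain storage, and vecvecfGains[i][j] is the gain of channel j in the mix of client i): when the master moves a fader, the server copies that gain into every client's gain row, keeping each client's own channel muted.

#include <vector>

// hypothetical Option B: apply the master's fader change to every client's
// stored gain row so the server-side state matches the common mix
void ApplyMasterGainToAll ( std::vector<std::vector<float>>& vecvecfGains,
                            const int                        iNumClients,
                            const int                        jChangedChannel,
                            const float                      fNewGain )
{
    for ( int i = 0; i < iNumClients; i++ )
    {
        // every client gets the master's value, except for their own channel,
        // which stays muted (no hearing yourself in the common mix)
        vecvecfGains[i][jChangedChannel] = ( i == jChangedChannel ) ? 0.0f : fNewGain;
    }
}

With this, the disabled client-side controls could simply display vecvecfGains[i] and would automatically show the mix the master has decided.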

Would really appreciate somebody testing the current implementation, at least server-side (so without any visual hints on the client side, but same audio behaviour).

Cheers!

@mhilpolt

Great stuff - I'm happy that there are some new ideas and implementations for the chorus branch of Jamulus! I've been using the singlemixserver branch for rehearsals in my chorus for two months, and it works great. We are around 40 singers and it runs stably and smoothly. In our setup, all participants - except the master - are using a Raspberry Pi with a VirtualStudio image from JackTrip. It has a very basic web UI - perfect for singers with basic PC knowledge, and it keeps everyone from being distracted from the sheet music.
As I'm the one who controls the mixing console, I'm missing a "solo" or "PFL" feature for individual channels that applies only to the "master", while everyone else keeps hearing the normal mix. This would be really helpful when trying to find a channel that is noisy or has a bad connection during the rehearsal.
If I press the "solo" button in the current implementation, everyone else also hears only the soloed channel, so it is useless in the singlemixserver context.

@cwerling

I need your input! I am still not sure what the most "Jamulus way" is to tackle the Jamulus-routed monitoring:

Method 1: Always mute people's own channel in the single mixes (this is currently implemented on my fork)
Pro: Fully disabled Mixer UI, easy to understand
Con: Nobody can have Jamulus-routed monitoring

Method 2: Let non-masters control at least their own channel in the mix
Pro: People can decide if and how loud they wanna hear themselves through Jamulus-routed monitoring
Con: Partially disabled mixer board, more complex logic

Method 2 gets even more nasty when you think about this: Who controls the pan/gain of the master herself on the common mix? If the master controls it for everybody, she effectively cannot mute herself (which wouldn't work for our choir's conductor).

But if the client controls the master's gain/pan, the UI gets even more complicated: for non-masters, all controls would be disabled except for their own channel and the master's channel. This is pretty complex for a user to understand, imho.

In terms of KISS I would prefer Method 1. Also, in our choir use case nobody relies on Jamulus-routed monitoring.

What do you guys think?

@ann0see
Member

ann0see commented Mar 28, 2021

The problem is that the "official" rule is to listen to the signal from the server. What about adding a parameter to the single mix option which enables/disables one's own signal for all singers?

@mhilpolt

I read in some threads that Jamulus monitoring is a must in order to gauge the delay and to hear my own voice with the same delay as everyone else. Otherwise I get the impression that I'm ahead of the others if I switch off Jamulus monitoring; it is mentioned as a cause of the tempo slowing down.
That makes sense to me, and in my chorus nobody has the possibility to switch monitoring off. It is a matter of getting accustomed to that kind of reverb, and it is only noticeable if you speak or sing alone; in the chorus sound, I cannot recognize it!
Just my opinion, based on some experience!

@cwerling

cwerling commented Mar 28, 2021

The problem is that the "official" rule is to listen to the signal from the server.

Can you elaborate on that one?

What about adding a Parameter to the single mix Parameter which enables/disables the own signal for all singers?

That's a good extension to Method 1 I think. We could have --singlemix and --singlemix-nomonitoring as server flags. The distinction would be client-agnostic.

I read in some threads that Jamulus monitoring is a must to be able to guess the delay and to hear my voice with the same delay as all others. Otherwise, I get the impression, that I'm faster than the others if I switch off Jamulus monitoring.

In my personal (choral singing) experience, this is not an issue at all. Rather, I am barely able to speak a few sentences while hearing myself with the overall delay (of let's say 50ms) without going crazy. :)

But use cases might differ, especially for non-vocal instruments, and so I don't want to make our use case (i.e. never use Jamulus-routed monitoring) the rule for everybody.

So as @ann0see suggested: Have --singlemix and --singlemix-nomonitoring, where the latter disables one's own channel for everybody?

@ann0see
Member

ann0see commented Mar 28, 2021

Better to have --singlemix "monitor|no-monitor".

For the official rule, see the Getting Started page on jamulus.io and Volker's paper: https://jamulus.io/PerformingBandRehearsalsontheInternetWithJamulus.pdf
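For illustration, a rough sketch of how such a proposed option could be parsed (assumption: none of this exists in Jamulus today, and Jamulus parses argv by hand; QCommandLineParser is only used here for brevity, and the option and value names are just the proposal from this thread):

#include <QCommandLineOption>
#include <QCommandLineParser>
#include <QCoreApplication>
#include <QDebug>

int main ( int argc, char* argv[] )
{
    QCoreApplication app ( argc, argv );

    QCommandLineParser parser;

    // proposed option: --singlemix monitor|no-monitor
    QCommandLineOption singleMixOption ( "singlemix",
                                         "Enable single mix mode (monitor|no-monitor).",
                                         "mode",
                                         "monitor" ); // default: own channel stays audible
    parser.addOption ( singleMixOption );
    parser.process ( app );

    const bool bSingleMix    = parser.isSet ( singleMixOption );
    const bool bNoMonitoring = bSingleMix && ( parser.value ( singleMixOption ) == "no-monitor" );

    qDebug() << "single mix:" << bSingleMix << "- mute own channel for everybody:" << bNoMonitoring;
    return 0;
}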

@cwerling

I'm done with the GUI changes and created a (to-be-reviewed) PR here: #1381

I suggest continuing the discussion over there :)

@gilgongo
Member

gilgongo commented Apr 6, 2021

Locking this issue now so as to keep discussions in fewer places. See also #1380

@jamulussoftware locked as too heated and limited conversation to collaborators Apr 6, 2021
@ann0see
Member

ann0see commented Apr 22, 2022

Hi, sorry for the maintenance noise here - I'm just triaging issues.

Since this issue is locked and another discussion is linked by gilgongo, it doesn't make sense to leave this issue open. Therefore, I'll close it.

Please continue any related discussion in the other discussion.

Note on the topic: see the feature branch https://github.com/cwerling/jamulus-mastermix/tree/feature_singlemix_reloaded, which any interested person can take up if needed. Please open a new issue if you plan to improve it.

@ann0see closed this as completed Apr 22, 2022