
Conversation

derekhiggins
Contributor

The server frequently crashes when dealing with two clients simultaneously, so limit it to a single worker. If a second `ilab` command tries to connect to the server while the worker is busy, it gets an HTTP 503 and starts its own local server instead.

We also need to disable keep-alive to prevent consecutive calls from the same client holding the only worker open and DOS'ing itself.

There is still one error case: if the chat client is started but not actively chatting when generate is started, chat will fail when the model is called. Not ideal, but not as bad as an hour-long "ilab generate" failing.
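
For illustration only (not the actual ilab/uvicorn code), here is a minimal stdlib sketch of the keep-alive point: a handler that closes the connection after every response, so a single chatty client cannot pin the lone worker. All names here are hypothetical.

```python
# Sketch only: each response explicitly opts out of keep-alive, so the
# single worker is freed as soon as the response is written.
from http.server import BaseHTTPRequestHandler, HTTPServer

class NoKeepAliveHandler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # keep-alive is the HTTP/1.1 default

    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.send_header("Connection", "close")  # refuse keep-alive
        self.end_headers()
        self.wfile.write(body)
        self.close_connection = True  # drop the socket after this response

    def log_message(self, *args):  # keep the sketch quiet
        pass
```

With uvicorn the equivalent lever is its keep-alive timeout setting; the mechanism differs, but the effect is the same: each response releases the worker instead of letting one client hold it.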

Fixes #346

Member

@hickeyma hickeyma left a comment


Thanks for pushing a fix @derekhiggins. Do you mind also adding a note to the README?

@derekhiggins
Contributor Author

Thanks for pushing a fix @derekhiggins. Do you mind also adding a note to the README?

Added

@markstur
Member

markstur commented Apr 11, 2024

An error I got when running multiple chat sessions and generate with this PR:

>>> Traceback (most recent call last): [S][default]
  File "<string>", line 1, in <module>
  File "/Users/markstur/.pyenv/versions/3.9.18/lib/python3.9/multiprocessing/spawn.py", line 116, in spawn_main
    exitcode = _main(fd, parent_sentinel)
  File "/Users/markstur/.pyenv/versions/3.9.18/lib/python3.9/multiprocessing/spawn.py", line 126, in _main
    self = reduction.pickle.load(from_parent)
  File "/Users/markstur/.pyenv/versions/3.9.18/lib/python3.9/multiprocessing/synchronize.py", line 110, in __setstate__
    self._semlock = _multiprocessing.SemLock._rebuild(*state)
FileNotFoundError: [Errno 2] No such file or directory

@markstur
Member

Another error I got when running multiple chat sessions and generate with this PR:

>>> hi                                  [S][default]
INFO 2024-04-11 10:32:02,083 _client.py:1026 HTTP Request: POST http://127.0.0.1:8000/v1/chat/completions "HTTP/1.1 503 Service Unavailable"
INFO 2024-04-11 10:32:02,083 _base_client.py:1040 Retrying request to /chat/completions in 0.895726 seconds
INFO 2024-04-11 10:32:02,986 _client.py:1026 HTTP Request: POST http://127.0.0.1:8000/v1/chat/completions "HTTP/1.1 503 Service Unavailable"
INFO 2024-04-11 10:32:02,986 _base_client.py:1040 Retrying request to /chat/completions in 1.879733 seconds
INFO 2024-04-11 10:32:04,873 _client.py:1026 HTTP Request: POST http://127.0.0.1:8000/v1/chat/completions "HTTP/1.1 503 Service Unavailable"
Unknown error
Executing chat failed with: API issue found while executing chat: Unknown error: <class 'openai.InternalServerError'>

Member

@markstur markstur left a comment


I'm still looking into this, but other than some readme nits and some errors I encountered (see comments) I have one bigger concern...

So are we really sure that we should automatically just keep firing up more servers when we've already seen memory-pressure and GPU-fit issues? My M3 is doing OK with this so far, but I would not be surprised if low-memory laptops reboot (rumors?).

Also, these temporary servers don't log, so I'm just guessing: they use some GPU but eventually don't fit, and then what?

I haven't quite figured out if this is safe and desirable yet. Not sure it isn't, but I'm concerned about what the OOM experience might be.

Also, I was going to look into backlog feature of uvicorn (maybe someone already did?)

[update: I think backlog is not useful here. Also, it's looking like most of my OOM concerns are from back when the model was much bigger. It's looking reasonable, but I'm still testing w/ 64G of memory].

README.md Outdated
@@ -194,6 +194,8 @@ The full process is described graphically in the [workflow diagram](./docs/workf
Press CTRL+C to shut down the server.
```

> **NOTE:** The `ilab` server can only server a single client. If two `ilab` clients try to connect to the same ilab server at the same time the second one will fail and instead attempt to start its own temporary server. This will require additional resources on the host machine.
Member


typo: s/can only server/can only serve/

Member


grammarly... probably a comma after "at the same time"

Member


Needs rewording. Currently I'm thinking the first sentence is not needed, because this is now handled automatically.

Next, "will fail and ...start..." I think that's okay because it's aligned with the logged messaging, but maybe "will fail" and "instead attempt to" can be omitted. Yep... say what it will do with fewer wishy-washy details (this is just an opinionated edit attempt).

Also, if you can find the right words it isn't "the second", it is all subsequent clients (but I don't like that word).

Contributor Author


will fix when updating

@derekhiggins
Contributor Author

Another error I got when running multiple chat sessions and generate with this PR:

>>> hi                                  [S][default]
INFO 2024-04-11 10:32:02,083 _client.py:1026 HTTP Request: POST http://127.0.0.1:8000/v1/chat/completions "HTTP/1.1 503 Service Unavailable"
INFO 2024-04-11 10:32:02,083 _base_client.py:1040 Retrying request to /chat/completions in 0.895726 seconds
INFO 2024-04-11 10:32:02,986 _client.py:1026 HTTP Request: POST http://127.0.0.1:8000/v1/chat/completions "HTTP/1.1 503 Service Unavailable"
INFO 2024-04-11 10:32:02,986 _base_client.py:1040 Retrying request to /chat/completions in 1.879733 seconds
INFO 2024-04-11 10:32:04,873 _client.py:1026 HTTP Request: POST http://127.0.0.1:8000/v1/chat/completions "HTTP/1.1 503 Service Unavailable"
Unknown error
Executing chat failed with: API issue found while executing chat: Unknown error: <class 'openai.InternalServerError'>

This looks like the error case I called out in the PR description, i.e.

There is still one error case: if the chat client is started but not actively chatting when generate is started, chat will fail when the model is called. Not ideal, but not as bad as an hour-long "ilab generate" failing.

@derekhiggins
Contributor Author

An error I got when running multiple chat sessions and generate with this PR:

>>> Traceback (most recent call last): [S][default]
  File "<string>", line 1, in <module>
  File "/Users/markstur/.pyenv/versions/3.9.18/lib/python3.9/multiprocessing/spawn.py", line 116, in spawn_main
    exitcode = _main(fd, parent_sentinel)
  File "/Users/markstur/.pyenv/versions/3.9.18/lib/python3.9/multiprocessing/spawn.py", line 126, in _main
    self = reduction.pickle.load(from_parent)
  File "/Users/markstur/.pyenv/versions/3.9.18/lib/python3.9/multiprocessing/synchronize.py", line 110, in __setstate__
    self._semlock = _multiprocessing.SemLock._rebuild(*state)
FileNotFoundError: [Errno 2] No such file or directory

I haven't seen this happen; did you do anything special to provoke it?

@derekhiggins
Contributor Author

I'm still looking into this, but other than some readme nits and some errors I encountered (see comments) I have one bigger concern...

So are we really sure that we should automatically just keep firing up more servers when we've already seen memory-pressure and GPU-fit issues? My M3 is doing OK with this so far, but I would not be surprised if low-memory laptops reboot (rumors?).

Also, these temporary servers don't log, so I'm just guessing: they use some GPU but eventually don't fit, and then what?

I haven't quite figured out if this is safe and desirable yet. Not sure it isn't, but I'm concerned about what the OOM experience might be.

Also, I was going to look into backlog feature of uvicorn (maybe someone already did?)

[update: I think backlog is not useful here. Also, it's looking like most of my OOM concerns are from back when the model was much bigger. It's looking reasonable, but I'm still testing w/ 64G of memory].

I did check how much RAM was being used when proposing this and it seemed reasonable (about 2 GB), but I also have lots of RAM. An alternative could be to just output a message from the chat client to try again later if it gets a 503. I honestly have no idea what happens if two servers run on the same GPU (and I don't have access to one to check).

@markstur
Member

Another error... Not sure what happened. This was an ilab generate that was running along with other things. It looks like this one was NOT using a temporary server, so somehow at 68% it got bumped (maybe I ran another generate or chat that stole its server?).

This doesn't seem like a big issue considering I'd say I was abusing concurrency, but it does seem contrary to the objective of not failing an ilab generate at 68% completion. Makes me wonder about the keep-alive setting.

Q> Formulate a pun using "light" and "dance."
I> 
A> Why can't you play hide and go seek in the dark?
Because it's impossible to seek the light! (Using "seek" as in dancing, not searching)

 68%|███████████████████████████████████████████████████████████████████████▍                                 | 68/100 [10:16<03:40,  6.89s/it]DEBUG 2024-04-11 11:41:39,215 generate_data.py:512 Assessing generated samples took 1.61s
DEBUG 2024-04-11 11:41:39,215 generate_data.py:513 Generated 1 instructions(discarded 1), rouged 0, kept 1 instructions
INFO 2024-04-11 11:41:39,217 generate_data.py:451 Selected taxonomy path compositional_skills->writing->freeform->jokes->puns->general
INFO 2024-04-11 11:41:39,233 _client.py:1026 HTTP Request: POST http://127.0.0.1:8000/v1/chat/completions "HTTP/1.1 503 Service Unavailable"
INFO 2024-04-11 11:41:39,233 _base_client.py:1040 Retrying request to /chat/completions in 0.791465 seconds
INFO 2024-04-11 11:41:40,030 _client.py:1026 HTTP Request: POST http://127.0.0.1:8000/v1/chat/completions "HTTP/1.1 503 Service Unavailable"
INFO 2024-04-11 11:41:40,030 _base_client.py:1040 Retrying request to /chat/completions in 1.989727 seconds
INFO 2024-04-11 11:41:42,027 _client.py:1026 HTTP Request: POST http://127.0.0.1:8000/v1/chat/completions "HTTP/1.1 503 Service Unavailable"
Generating dataset failed with the following error: There was a problem connecting to the server Service Unavailable
 68%|███████████████████████████████████████████████████████████████████████▍                                 | 68/100 [10:19<04:51,  9.11s/it]

@markstur
Member

Not specific to this PR, but here's a thought... Instead of doing a sneaky ensure_server() to create a secret "temporary" server with no logging, we should either never do that, or always do that (but with logging). I.e., given that we are doing this stuff, why not just have chat and generate always fire up a temporary server. Alternatively, throw an error and never do it (perhaps explicitly do it by force with a flag).

To me it currently looks like we can do this always if we fix the logging (e.g. to a file). The best alternative would be to fix concurrency to actually work with multiple generate/chat clients (but that's tricky w/ current limitations). This place in the middle where we are currently investing is not a good place to be.

That said, I'm still trying to convince myself that this PR is better than the current state. Trying to be harsh and doing more testing because of the timing.

@derekhiggins
Contributor Author

Another error... Not sure what happened. This was an ilab generate that was running along with other things. It looks like this one was NOT using a temporary server, so somehow at 68% it got bumped (maybe I ran another generate or chat that stole its server?).

This doesn't seem like a big issue considering I'd say I was abusing concurrency, but it does seem contrary to the objective of not failing an ilab generate at 68% completion. Makes me wonder about the keep-alive setting.

Q> Formulate a pun using "light" and "dance."
I> 
A> Why can't you play hide and go seek in the dark?
Because it's impossible to seek the light! (Using "seek" as in dancing, not searching)

 68%|███████████████████████████████████████████████████████████████████████▍                                 | 68/100 [10:16<03:40,  6.89s/it]DEBUG 2024-04-11 11:41:39,215 generate_data.py:512 Assessing generated samples took 1.61s
DEBUG 2024-04-11 11:41:39,215 generate_data.py:513 Generated 1 instructions(discarded 1), rouged 0, kept 1 instructions
INFO 2024-04-11 11:41:39,217 generate_data.py:451 Selected taxonomy path compositional_skills->writing->freeform->jokes->puns->general
INFO 2024-04-11 11:41:39,233 _client.py:1026 HTTP Request: POST http://127.0.0.1:8000/v1/chat/completions "HTTP/1.1 503 Service Unavailable"
INFO 2024-04-11 11:41:39,233 _base_client.py:1040 Retrying request to /chat/completions in 0.791465 seconds
INFO 2024-04-11 11:41:40,030 _client.py:1026 HTTP Request: POST http://127.0.0.1:8000/v1/chat/completions "HTTP/1.1 503 Service Unavailable"
INFO 2024-04-11 11:41:40,030 _base_client.py:1040 Retrying request to /chat/completions in 1.989727 seconds
INFO 2024-04-11 11:41:42,027 _client.py:1026 HTTP Request: POST http://127.0.0.1:8000/v1/chat/completions "HTTP/1.1 503 Service Unavailable"
Generating dataset failed with the following error: There was a problem connecting to the server Service Unavailable
 68%|███████████████████████████████████████████████████████████████████████▍                                 | 68/100 [10:19<04:51,  9.11s/it]

OK, I think I see what's going on here: the generate command holds the single worker while it's talking to the server, but generate does some processing between calls. Locally for me this window is about 0.1 seconds.

If another client hits the server during this window then it's not available to the generate command. The best we can do in this case is to increase the number of retries that generate does and hope that the other client is finished in time, e.g. with retries set to 10:

INFO 2024-04-11 21:59:17,905 _client.py:1026 HTTP Request: POST http://127.0.0.1:8000/v1/chat/completions "HTTP/1.1 200 OK"
  0%|| 0/100 [00:34<?, ?it/s]
INFO 2024-04-11 21:59:17,975 generate_data.py:451 Selected taxonomy path compositional_skills->writing->freeform->words ->10worddescription
INFO 2024-04-11 21:59:18,057 _client.py:1026 HTTP Request: POST http://127.0.0.1:8000/v1/chat/completions "HTTP/1.1 503 Service Unavailable"
INFO 2024-04-11 21:59:18,058 _base_client.py:1007 Retrying request to /chat/completions in 0.978796 seconds
INFO 2024-04-11 21:59:19,042 _client.py:1026 HTTP Request: POST http://127.0.0.1:8000/v1/chat/completions "HTTP/1.1 503 Service Unavailable"
INFO 2024-04-11 21:59:19,043 _base_client.py:1007 Retrying request to /chat/completions in 1.570016 seconds
INFO 2024-04-11 21:59:20,622 _client.py:1026 HTTP Request: POST http://127.0.0.1:8000/v1/chat/completions "HTTP/1.1 503 Service Unavailable"
INFO 2024-04-11 21:59:20,623 _base_client.py:1007 Retrying request to /chat/completions in 3.476980 seconds
INFO 2024-04-11 21:59:24,108 _client.py:1026 HTTP Request: POST http://127.0.0.1:8000/v1/chat/completions "HTTP/1.1 503 Service Unavailable"
INFO 2024-04-11 21:59:24,109 _base_client.py:1007 Retrying request to /chat/completions in 6.613866 seconds
INFO 2024-04-11 21:59:31,476 _client.py:1026 HTTP Request: POST http://127.0.0.1:8000/v1/chat/completions "HTTP/1.1 503 Service Unavailable"
INFO 2024-04-11 21:59:31,476 _base_client.py:1007 Retrying request to /chat/completions in 6.942563 seconds
INFO 2024-04-11 21:59:38,434 _client.py:1026 HTTP Request: POST http://127.0.0.1:8000/v1/chat/completions "HTTP/1.1 503 Service Unavailable"
INFO 2024-04-11 21:59:38,435 _base_client.py:1007 Retrying request to /chat/completions in 6.272285 seconds
INFO 2024-04-11 21:59:44,722 _client.py:1026 HTTP Request: POST http://127.0.0.1:8000/v1/chat/completions "HTTP/1.1 503 Service Unavailable"
INFO 2024-04-11 21:59:44,723 _base_client.py:1007 Retrying request to /chat/completions in 6.090771 seconds
INFO 2024-04-11 21:59:50,826 _client.py:1026 HTTP Request: POST http://127.0.0.1:8000/v1/chat/completions "HTTP/1.1 503 Service Unavailable"
INFO 2024-04-11 21:59:50,827 _base_client.py:1007 Retrying request to /chat/completions in 6.557009 seconds
INFO 2024-04-11 21:59:57,393 _client.py:1026 HTTP Request: POST http://127.0.0.1:8000/v1/chat/completions "HTTP/1.1 503 Service Unavailable"
INFO 2024-04-11 21:59:57,394 _base_client.py:1007 Retrying request to /chat/completions in 7.626869 seconds
INFO 2024-04-11 22:00:05,034 _client.py:1026 HTTP Request: POST http://127.0.0.1:8000/v1/chat/completions "HTTP/1.1 503 Service Unavailable"
INFO 2024-04-11 22:00:05,035 _base_client.py:1007 Retrying request to /chat/completions in 7.702280 seconds
INFO 2024-04-11 22:00:34,062 _client.py:1026 HTTP Request: POST http://127.0.0.1:8000/v1/chat/completions "HTTP/1.1 200 OK"
Q> In 10 or less words, describe the appearance of fire
I>
A> Orange or yellow light, flickering flames
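
The delays in the log above come from the client's retry backoff. As a rough illustration of that kind of schedule (exponential growth with jitter, capped; the parameter values below are assumptions, not the openai client's actual settings):

```python
import random

def backoff_delays(retries, base=0.5, cap=8.0, seed=None):
    """Exponential backoff with jitter, capped at `cap` seconds per attempt."""
    rng = random.Random(seed)
    delays = []
    for attempt in range(retries):
        # Double the delay each attempt, cap it, then jitter downwards
        # so concurrent clients don't retry in lockstep.
        delays.append(min(cap, base * 2 ** attempt) * rng.uniform(0.5, 1.0))
    return delays
```

With ten retries the total wait is roughly the sum of the capped delays plus the request time, on the order of a minute, which is consistent with the roughly 75-second stretch of retries visible in the log before the final 200 OK.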

Maybe the way to go would be to merge this PR (with or without the increased retries for generation) and also ensure we fix #156, so that generation is less likely to fail, but if it does it can be resumed. I'm happy to work on that as well if it's worthwhile.

Or perhaps just abandon this PR and fix #156, so that even if generation crashes all is not lost, as it can be resumed.

@derekhiggins
Contributor Author

OK, so to summarize: I think this PR is an improvement on the current situation, but it's not perfect, so we have options:

  1. Merge this to make a crash less likely, but also complicate things
  2. Fix "How to stop lab generate and save output for continuation later?" #156 so that resume works
  3. Make generate and chat always talk to their own server (with logging to a file)

I think my preference is to fix #156 and then discuss the future of local servers in another forum.

@markstur
Member

Another interesting behavior. Chat decides to connect to the server (not temporary) and then can't:

% ilab chat
DEBUG 2024-04-11 15:37:55,214 server.py:37 Trying to connect to http://127.0.0.1:8000/v1...
INFO 2024-04-11 15:37:55,259 _client.py:1026 HTTP Request: GET http://127.0.0.1:8000/v1/models "HTTP/1.1 503 Service Unavailable"
INFO 2024-04-11 15:37:55,259 _base_client.py:1040 Retrying request to /models in 0.984833 seconds
INFO 2024-04-11 15:37:56,251 _client.py:1026 HTTP Request: GET http://127.0.0.1:8000/v1/models "HTTP/1.1 503 Service Unavailable"
INFO 2024-04-11 15:37:56,251 _base_client.py:1040 Retrying request to /models in 1.538080 seconds
INFO 2024-04-11 15:37:57,797 _client.py:1026 HTTP Request: GET http://127.0.0.1:8000/v1/models "HTTP/1.1 200 OK"
╭───────────────────────────────────────────────────────────────── system ──────────────────────────────────────────────────────────────────╮
│ Welcome to Chat CLI w/ MERLINITE-7B-Q4_K_M (type /h for help)                                                                             │
╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
>>> hi                                                                                                                           [S][default]
INFO 2024-04-11 15:38:02,570 _client.py:1026 HTTP Request: POST http://127.0.0.1:8000/v1/chat/completions "HTTP/1.1 503 Service Unavailable"
INFO 2024-04-11 15:38:02,571 _base_client.py:1040 Retrying request to /chat/completions in 0.890687 seconds
INFO 2024-04-11 15:38:03,467 _client.py:1026 HTTP Request: POST http://127.0.0.1:8000/v1/chat/completions "HTTP/1.1 503 Service Unavailable"
INFO 2024-04-11 15:38:03,467 _base_client.py:1040 Retrying request to /chat/completions in 1.890963 seconds
INFO 2024-04-11 15:38:05,365 _client.py:1026 HTTP Request: POST http://127.0.0.1:8000/v1/chat/completions "HTTP/1.1 503 Service Unavailable"
Unknown error
Executing chat failed with: API issue found while executing chat: Unknown error: <class 'openai.InternalServerError'>

@markstur
Member

Yeah regarding the 1, 2, 3 options:

1 - Probably. I'm happy to see running a bunch of these temporary servers isn't so bad on an M1. They coexist better than I thought. Kind of a bummer that this doesn't exactly fix everything.
2 - Definitely need to reintroduce continue. It just wasn't a high enough priority.
3 - For future, we should probably discuss this. I think the sneaky ensure-server needs rethinking.

@markstur
Member

@hickeyma @xukai92 @anik120 I'm tempted to merge this, but with my mixed results and the "high bar" for approval right now, I'm wondering if any of you can come to the same conclusion.

@derekhiggins
Contributor Author

One more data point: the code to resume a partially done generate is still present. Simply copy a generated_merlinite....json file to regen.json and restart `ilab generate`:

[derekh@u07 generated]$ cp generated_merlinite-7b-Q4_K_M_2024-04-11T23_48_39.json regen.json

So a crashed generate doesn't mean all is lost. I can push up a PR to add it back into the docs if that's what you want.


Another interesting behavior. Chat decides to connect to the server (not temporary) and then can't:

When chat starts, if any of its checks to the server succeed, it will not start a local server and will then fail later when generate is using the single worker. (This is the 0.1 second window I mentioned above.)
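
That check-then-use pattern is inherently racy. A stdlib sketch of the shape of the problem (hypothetical names, not the real ensure_server code):

```python
# Sketch only: a one-shot availability probe. A True result only describes
# this instant; by the time the real request goes out, another client
# (e.g. generate) may have reclaimed the single worker and we get a 503.
import urllib.error
import urllib.request

def server_available(url, timeout=2.0):
    """Probe the server once; success here does not guarantee the
    follow-up request will succeed."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False
```

No amount of probing closes the window; the decision to fall back to a local server has to tolerate the probe and the real call disagreeing.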

@derekhiggins
Contributor Author

One more data point: the code to resume a partially done generate is still present. Simply copy a generated_merlinite....json file to regen.json and restart `ilab generate`:

[derekh@u07 generated]$ cp generated_merlinite-7b-Q4_K_M_2024-04-11T23_48_39.json regen.json

So a crashed generate doesn't mean all is lost. I can push up a PR to add it back into the docs if that's what you want.

#854

@derekhiggins
Contributor Author

derekhiggins commented Apr 12, 2024

@markstur I've been wondering why you are hitting more tracebacks than me, and now I see it: the 0.1 second window where the chat can take the server is a lot longer for you.

See "Assessing generated samples took 1.61s".

Your logs
 68%|███████████████████████████████████████████████████████████████████████▍              | 68/100 [10:16<03:40,  6.89s/it]
 DEBUG 2024-04-11 11:41:39,215 generate_data.py:512 Assessing generated samples took 1.61s
 DEBUG 2024-04-11 11:41:39,215 generate_data.py:513 Generated 1 instructions(discarded 1), rouged 0, kept 1 instructions

vs mine

 69%|███████████████████████████████████████████████████████████         | 69/100 [24:21<05:41, 11.03s/it]
DEBUG 2024-04-12 13:09:36,361 generate_data.py:512 Assessing generated samples took 0.06s
DEBUG 2024-04-12 13:09:36,362 generate_data.py:513 Generated 2 instructions(discarded 0), rouged 0, kept 2 instructions                           

For you, the server sits unused by "generate" for longer windows between calls, giving the chat client more of a chance to tie it up.

I got lots of CPU cores

Member

@xukai92 xukai92 left a comment


happy to merge when the README is fixed as Mark suggested

@derekhiggins
Contributor Author

happy to merge when the README is fixed as Mark suggested

Done. I couldn't think of a good word to avoid "subsequent", so reworded a little.

The server frequently crashes when dealing with two clients simultaneously,
so limit it to a single worker. If a second ilab command tries to connect
to the server while the worker is busy, it gets an HTTP 503 and starts its
own local server instead.

We also need to disable keep-alive to prevent consecutive calls from the
same client holding the only worker open and DOS'ing itself.

There is still one error case: if the chat client is started but not
actively chatting when generate is started, chat will fail when the model
is called. Not ideal, but not as bad as an hour-long "ilab generate" failing.

Fixes #346

Signed-off-by: Derek Higgins <derekh@redhat.com>
@anik120
Contributor

anik120 commented Apr 14, 2024

Thanks for the rigorous testing Mark! And thanks for taking the time to engage in the discussion Derek! This is really helpful. There's a lot going on here obviously, but it looks like the "happy path" of "generate and then chat" is fairly stable for now, we've added a note to the README, and we've made a modification to safeguard an active generation from crashing. I have also been thinking about reassessing ensure_server, exactly for the reasons you folks already discussed, but we can do all of these as follow-ups. Merging this now, because it feels like the right time to let folks test it out instead of delaying further, and it gives us time to do the follow-ups.

@anik120 anik120 merged commit 6db7ece into instructlab:main Apr 14, 2024
Successfully merging this pull request may close these issues.

Bug: Lab generate crashing when trying to chat in parallel
6 participants