
Conversation

charlesxu0124 (Collaborator)

After further testing and usage, we found that applying a reward penalty to effective gripper actions (i.e., commanding close while the gripper is open, and vice versa) speeds up training on tasks that involve learning to grasp. Specifically, it prevents the policy from excessively opening and closing the gripper. The code has been updated to apply a -0.1 reward penalty by default, a value that we found works well for most tasks.
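For illustration, here is a minimal sketch of how such a penalty could be applied as a gym-style wrapper. The class name, the assumption that the last action dimension encodes the gripper command (values above 0.5 meaning "close"), and the Gymnasium API are assumptions made for the sketch, not necessarily how this repository implements it.

```python
import gymnasium as gym


class GripperPenaltyWrapper(gym.Wrapper):
    """Sketch: penalize gripper actions that actually toggle the gripper state."""

    def __init__(self, env, penalty=-0.1):
        super().__init__(env)
        self.penalty = penalty
        self.gripper_closed = False

    def reset(self, **kwargs):
        obs, info = self.env.reset(**kwargs)
        self.gripper_closed = False
        return obs, info

    def step(self, action):
        obs, reward, terminated, truncated, info = self.env.step(action)
        commands_close = action[-1] > 0.5  # assumed action convention
        # Only "effective" gripper actions -- ones that flip the gripper from
        # open to closed or vice versa -- incur the penalty, discouraging the
        # policy from needlessly cycling the gripper.
        if commands_close != self.gripper_closed:
            reward += self.penalty
            self.gripper_closed = commands_close
        return obs, reward, terminated, truncated, info
```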

We also edited the environment so that gripper actions are executed in a blocking fashion: the environment waits until the gripper has fully opened or closed before returning the observation. This also reduces training time by making the transitions more Markovian.
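Blocking execution can be sketched as a small polling helper that the environment calls after sending a gripper command. The helper below and the commented usage are hypothetical (the `_send_gripper_command`, `_gripper_is_moving`, and `_get_obs` names stand in for whatever robot interface the environment actually uses) and only illustrate the idea.

```python
import time


def wait_for_gripper(is_moving, timeout=2.0, poll_interval=0.01):
    """Block until the gripper stops moving or the timeout elapses.

    `is_moving` is any callable that returns True while the gripper is still
    opening or closing, e.g. a query against the robot's gripper state.
    """
    start = time.time()
    while is_moving() and time.time() - start < timeout:
        time.sleep(poll_interval)


# Hypothetical usage inside the environment's step(): send the command, then
# wait for the motion to finish before reading the observation, so the
# returned state already reflects the completed open/close.
#
#   self._send_gripper_command(close=commands_close)
#   wait_for_gripper(self._gripper_is_moving)
#   obs = self._get_obs()
```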

charlesxu0124 requested a review from jianlanluo on June 25, 2024 at 02:10
charlesxu0124 marked this pull request as ready for review on June 25, 2024 at 02:10
jianlanluo merged commit 2f9f315 into main on June 25, 2024