Reconstruction of hand motion from electroencephalography (EEG) signals is a challenging problem that remains unsolved. Most related studies rely on a motion tracking system to record a sequence of hand coordinates paired with biosignals, which is then used to train a mapping function between the two. For amputees, this approach is not possible. Moreover, few studies have examined how different training techniques affect the accuracy of a motion reconstruction system. We developed a virtual avatar that presents different upper-limb motions. Subjects were asked to follow the avatar’s motion while their EEG and electromyography (EMG) signals were recorded and paired with the avatar’s hand trajectory. This task was performed under three conditions: repeating the motion from memory, repeating the motion while watching it on a screen, and repeating the motion while seeing it in virtual reality (VR). We found no significant difference among the three conditions in terms of correlation values. However, using EEG and EMG together yielded better results than using either signal alone. Additionally, significant differences were found in the EEG activity, suggesting that even though the task (moving the arm) was the same across the three conditions, the brain dynamics differed. Specifically, the VR condition produced a stronger alpha desynchronization during the motion. Finally, when only EEG signals were used, our results were comparable with those of studies that relied on a motion tracking system.
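The mapping function described above is commonly realized as a linear decoder trained on time-lagged signal features and evaluated with the Pearson correlation between predicted and actual trajectories. The sketch below illustrates that general scheme with ridge regression on synthetic data; all dimensions, the regularization value, and the linear ground truth are illustrative assumptions, not the actual pipeline of this study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 32 EEG channels, 2000 samples, 3-D hand trajectory.
# (Illustrative only -- a real recording would be band-pass filtered, etc.)
n_samples, n_channels, n_dims, n_lags = 2000, 32, 3, 10
eeg = rng.standard_normal((n_samples, n_channels))
true_w = rng.standard_normal((n_channels, n_dims))        # hypothetical ground truth
traj = eeg @ true_w + 0.1 * rng.standard_normal((n_samples, n_dims))

def lagged_features(x, n_lags):
    """Stack the current sample and n_lags past samples for every channel."""
    n, c = x.shape
    out = np.zeros((n, c * (n_lags + 1)))
    for k in range(n_lags + 1):
        out[k:, k * c:(k + 1) * c] = x[:n - k]
    return out

X = lagged_features(eeg, n_lags)
X = np.hstack([X, np.ones((n_samples, 1))])               # bias column

# Train/test split and closed-form ridge regression: W = (X'X + lam*I)^-1 X'Y.
split = int(0.8 * n_samples)
lam = 1.0
A = X[:split].T @ X[:split] + lam * np.eye(X.shape[1])
W = np.linalg.solve(A, X[:split].T @ traj[:split])
pred = X[split:] @ W

# Per-dimension Pearson correlation between predicted and actual trajectory,
# the figure of merit used to compare reconstruction accuracy.
corr = [np.corrcoef(pred[:, d], traj[split:, d])[0, 1] for d in range(n_dims)]
print([round(c, 2) for c in corr])
```

Because the synthetic trajectory is a noisy linear function of the channels, the decoder recovers it almost perfectly; on real EEG, correlations are far lower, and concatenating EMG features to `X` is the natural way to test the combined-signal condition.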