Traditional control algorithms for gangue sorting robot arms, such as the grasping function method and the Ferrari-method-based dynamic target grasping algorithm, rely on an accurate environment model and lack adaptability during control. Meanwhile, traditional intelligent control algorithms such as deep deterministic policy gradient (DDPG) suffer from excessive output actions and sparse rewards that are easily masked. To address these problems, this study improves the neural network structure and the reward function of the traditional DDPG algorithm and proposes an improved DDPG algorithm based on reinforcement learning that is suited to controlling a six-degree-of-freedom gangue sorting robot arm. After gangue enters the workspace of the robot arm, the improved DDPG algorithm makes decisions from the gangue position and robot arm state returned by the corresponding sensors, and outputs a set of joint-angle control quantities to the motion controller. The controller then drives the robot arm according to the gangue position and the joint-angle control quantities, so that the arm moves to the gangue and sorts it. Simulation results show that, compared with the traditional DDPG algorithm, the improved DDPG algorithm is model-free and general, and adaptively learns the grasping pose through interaction with the environment. Moreover, the improved algorithm is the first to converge to the maximum reward value encountered during exploration. The robot arm controlled by the improved DDPG algorithm exhibits better policy generalization, smaller joint-angle control outputs, and higher gangue sorting efficiency.
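The decision step summarized above (sensor state in, bounded joint-angle control quantities out) can be sketched as a minimal actor network. This is an illustrative sketch only: the 9-dimensional state layout (3-D gangue position plus 6 joint angles), the network sizes, and the `max_delta` bound are assumptions for demonstration, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

class Actor:
    """Illustrative DDPG-style actor: maps state -> bounded joint-angle commands.

    Assumed state layout: 3-D gangue position + 6 joint angles = 9 inputs.
    Output: 6 joint-angle control quantities, squashed by tanh and scaled.
    """
    def __init__(self, state_dim=9, action_dim=6, hidden=64, max_delta=0.1):
        # randomly initialized weights; a real implementation would train
        # these with the DDPG actor-critic updates
        self.w1 = rng.standard_normal((state_dim, hidden)) * 0.1
        self.b1 = np.zeros(hidden)
        self.w2 = rng.standard_normal((hidden, action_dim)) * 0.1
        self.b2 = np.zeros(action_dim)
        self.max_delta = max_delta  # assumed per-step joint increment bound (rad)

    def act(self, state):
        h = np.tanh(state @ self.w1 + self.b1)
        # tanh squashing keeps every joint command in [-max_delta, max_delta],
        # reflecting the goal of a small joint-angle control output
        return self.max_delta * np.tanh(h @ self.w2 + self.b2)

# One decision step: sensed gangue position + current joint angles -> command.
gangue_pos = np.array([0.4, -0.2, 0.05])   # hypothetical sensor reading (m)
joint_angles = np.zeros(6)                 # hypothetical current arm state (rad)
state = np.concatenate([gangue_pos, joint_angles])
command = Actor().act(state)
print(command.shape)  # (6,) — one control quantity per joint
```

The tanh output layer is the standard way DDPG bounds continuous actions; here it directly enforces the small control output the abstract claims.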