Saturday 21 March 2009

Tuning EMC2 and Gecko servo drives

Tuning a PID loop usually involves repeatedly perturbing the thing that the loop controls, observing how the error changes over time, and adjusting the error feedback to improve the response.

My CNC router uses Gecko servo drives: a G320 for the Z (up/down) axis and G340s for the X and Y.

These use trim pots to set the gain and damping of the PID loop. There's a good set of tuning instructions in the Gecko user manual, but I came across a few subtleties that may be useful to someone else, both in how to measure the error and in what I found.

How I Measured

In order to tune the servo I needed to apply an impulse to it in a repeatable way. I wrote a short G-code loop for each axis that repeatedly rapid-moves the axis out and then immediately rapid-moves it back. Here's the one for the X axis:

(restore G92 offsets cleared by M2)
G92.3

G0 x0 y0 z0

(go this far before immediately reversing)
#<distance> = 10

#<count> = 0
O100 while [#<count> lt 1000]
G0 x#<distance>
G0 x0
(wait for 2 seconds)
G4 P2
#<count> = [#<count> + 1]
O100 endwhile

(end)
M2

Running this code in EMC2 subjects the chosen axis to worst-case demands: accelerating from full speed one way to full speed the other as quickly as possible, over and over. This is a realistic benchmark for the worst-case error that I'll see while I'm using the router, and it lets me concentrate on twiddling trim pots and pondering the measurements.

I soldered short wires to the error and ground testpoints on the drive boards so that I could tune the drives with their covers on.

In order to tune a Gecko servo drive properly you need an oscilloscope. The drives have an analog error output, and you need to know how that varies over time in order to tune the drive and measure the maximum error. Anyone with their servo PID loop exposed to the EMC HAL can use the pure-software Halscope, but that's no good for external hardware. I've just bought an MSO-19, which seems to be a pretty reasonable mixed-signal 'scope for an amazingly small $250. It does have a couple of shortcomings that I've found so far, though.

Firstly, the maximum timebase is 10ms, which is slightly shorter than I would have liked for this job. There were times when I couldn't see the whole of the direction-changeover period, which was about 120ms long for the X axis. The best I could do to work 'round that was to set the trigger to detect the start of the increased error at the beginning of the reversal, examine the first screenful, and then set a trigger holdoff of 80ms or so, so that I could check a screenful starting 80ms after the trigger. This was good enough.

Secondly, the maximum voltage offset that I've been able to apply is 2V. The error signal sits at about 4V for no error, and varies by 0.04V for each encoder count of error: negative for lag, positive for lead. This means that I want to measure small voltage changes, but I have to set a large voltage scale just to keep the signal on the screen at all. That gives me insufficient precision on both the readings and (especially) the trigger setting.
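To put numbers on that, here's a quick sketch of the arithmetic (Python, using only the figures above):

# Error signal levels on the drive, from the figures above.
QUIESCENT_V = 4.0       # error output voltage at zero following error
VOLTS_PER_COUNT = 0.04  # change per encoder count of error
MAX_OFFSET_V = 2.0      # the most offset the scope will apply

# At least 2V of the 4V quiescent level stays on screen, so the
# vertical scale has to span volts while the interesting signal is
# tens of millivolts.
for counts in (1, 5, 10):
    swing = counts * VOLTS_PER_COUNT
    print(f"{counts:2d} counts = {swing * 1000:3.0f}mV riding on "
          f"{QUIESCENT_V - MAX_OFFSET_V:.0f}V")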

The solution to this problem consists of a 3300uF cap, a 10K resistor, and a couple of crocodile clips:

(The negative side of the cap is attached to the resistor).

I connect the red wire to the error signal, and the black to the servo drive signal ground. It takes about a minute to charge the cap up, and then it blocks the DC part of the error signal.

I connect the scope probe between the cap and the resistor, and the scope ground to the signal ground.

The error averages zero over a few seconds, so the short bursts of error as the axis starts, reverses, and stops show up as small variations around 0V, which is ideal.
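For anyone wondering about the component values: the cap and resistor form a high-pass filter with a very low corner frequency, so the millisecond-scale error bursts pass through untouched while the standing 4V level is blocked. A rough check of the numbers (Python; this ignores the output impedance of the error signal, which I haven't measured):

import math

C = 3300e-6  # DC-blocking cap, farads (3300uF)
R = 10e3     # resistor to signal ground, ohms (10K)

tau = R * C                    # RC time constant
f_c = 1 / (2 * math.pi * tau)  # high-pass corner frequency

# tau is 33s, so the cap is mostly charged after a couple of time
# constants - about a minute, as observed. The ~5mHz corner passes
# the ~120ms error bursts essentially unattenuated.
print(f"time constant    = {tau:.0f}s")
print(f"corner frequency = {f_c * 1000:.1f}mHz")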

What I Found

The first thing to tune was the top speed of the axis. I had calculated this to be 116mm/s from the spec of the servo motor (2800 rpm), the 2:1 step down on the belt, and the 5mm/rev ball screws. In practice the absolute maximum is between 105 and 110mm/s. Beyond that the servo can't keep up with the commands from the computer and the drive faults out. No acceleration setting that I tried allowed the servo to work at a higher top speed.
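For reference, the calculation (Python, straight from the figures above):

MOTOR_RPM = 2800   # servo motor spec
BELT_RATIO = 2     # 2:1 step down: motor revs per screw rev
SCREW_LEAD = 5     # mm of travel per screw revolution

screw_rps = MOTOR_RPM / 60 / BELT_RATIO  # screw revs per second
top_speed = screw_rps * SCREW_LEAD       # axis speed, mm/s

print(f"theoretical top speed = {top_speed:.1f}mm/s")  # 116.7mm/s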

Having got an attainable max speed, I tuned the servo gain and damping to the maximum gain and minimum damping that didn't produce any ringing (i.e. where the error was brought smoothly back to near zero, with no overshoot).

Then I tuned the max speed and acceleration: for a chosen top speed, I found the maximum acceleration that I could use while still seeing an acceptably small peak error.

When operating at 105mm/s I found that, even with optimal drive tuning and a modest 300mm/s/s acceleration, the error was unacceptably large: a ~2.4V measured error signal maps to ~0.15mm of following error. That's about 6 times the specified backlash on the ballscrews.
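The conversion from measured volts to following error falls out of the figures above; here it is as a sketch (Python; the 400 counts/mm scale is what the 2.4V-to-0.15mm mapping implies, and it's consistent with a 1000 count/rev encoder through the 2:1 belt and 5mm/rev screws):

VOLTS_PER_COUNT = 0.04  # drive error output scale, from above

# 2.4V / 0.04V = 60 counts, and 60 counts = ~0.15mm, which implies
# 400 counts per mm of travel (0.0025mm/count). That matches a
# 1000 count/rev encoder: 1000 * 2 (belt) / 5 (mm/rev) = 400.
COUNTS_PER_MM = 400

def error_mm(volts):
    """Measured error-signal swing (V) -> following error (mm)."""
    return volts / VOLTS_PER_COUNT / COUNTS_PER_MM

print(f"{error_mm(2.4):.3f}mm")  # 0.150mm: the 105mm/s case
print(f"{error_mm(0.8):.3f}mm")  # 0.050mm: what an 0.8V swing means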

Reducing the top speed to 95mm/s gave a max error of only 0.05mm with an acceleration of 800mm/s/s. This improvement surprised me. Decreasing the top speed further doesn't seem to lead to any additional improvement; I couldn't see one at 85mm/s, anyway. Servo torque is supposed to be approximately constant with RPM, but that clearly isn't the case here. I have to guess that the max torque of my servo motors drops off quickly near their specified max RPM.

I tuned Y and Z by the same method, and found that they could support higher max accelerations for the same peak error (900 and 1350mm/s/s, respectively). This is to be expected since they are driving less inertia with the same torque.
