
Big motor servo with AS5600

4 Posts
2 Users
1 Reactions
171 Views
(@poppa)
Member
Joined: 3 years ago
Posts: 4
Topic starter  

Hi folks,

I intend to build a big motor servo (24 VDC worm-gear motor) and check its position with an AS5600 angle sensor. The motor will be used to control a flap in a machine. The output shaft shall only rotate through roughly 90° and return to its home position after it has stood still for some time.

  • Flap closed is the home position and will always be the start.
  • Flap fully open is the maximum rotation angle.
  • Flap gradually open shall be possible. I thought about using a potentiometer for that (see the mapping sketch just after this list):
    pot max = flap 100% open
    pot min = flap 50% open
  • Flap movement shall be PID controlled
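
Roughly, the mapping I have in mind would look like this (only a sketch; the pin A0 and the 0-90° flap travel are assumptions):

// Sketch of the pot-to-setpoint mapping (assumed: pot on A0, flap travel 0-90 deg).
const int POT_PIN = A0;

float readSetpointDegrees() {
  int raw = analogRead(POT_PIN);          // 0..1023 on a 10-bit Arduino ADC
  // pot min -> 45 deg (flap 50% open), pot max -> 90 deg (flap 100% open)
  return 45.0 + (raw / 1023.0) * 45.0;
}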

I prefer the AS5600 sensor because there is no mechanical connection to the motor. With its 12-bit ADC it has a resolution better than 0.1°, so there is no need for gears and attached potentiometers.

I've got the AS5600 working and can control the motor to move to a desired position. In potentiometer-controlled servos the potentiometer is constantly read in an interrupt, but the AS5600 is connected through I2C. How can the PID function compare the actual reading with the desired position to calculate the error, and how does it know when the position has been reached?

I am somewhat confused on this topic.

Has anybody done something similar?

Any help is greatly appreciated.

Kindest regards from Hamburg, Germany.
Poppa


   
(@davee)
Member
Joined: 4 years ago
Posts: 1959
 

Hi @poppa,

Re: How can the PID function compare the actual reading with the desired position to calculate the error and to know if the position was reached?

I haven't built such a servo system, and sorry if I am misinterpreting your question, but in general terms...

--------------

The 'actual reading' is the value reported by the sensor ... e.g. the AS5600 in your case.

Normally, the actual code 'style', such as whether an interrupt routine is involved, is an implementation detail that should not directly affect the PID. Instead, you ensure that the PID has access to up-to-date sensor data. Thus, details like the use of interrupts only become a concern if they cause the data fed to the PID to be old and stale.

(Imagine trying to drive quickly along a narrow and twisty road, using a video image that is 10 seconds behind that of the actual situation!)

Of course, any sensing system will have some time delay, such as between the angle of the magnetic field surrounding the AS5600 changing and the corresponding new angle being sensed, calculated and presented to the PID code's data input. The implementer must ensure that the delay is short enough to avoid misleading the PID algorithm.

If a potentiometer were used instead of the AS5600, with its position determined by converting the potentiometer's output voltage into a digital value and this value then presented to the PID algorithm, there would also be delays.

So in all these cases, the designer/implementer should ensure the entire sensing and processing system is sufficiently fast for the application.
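
As a concrete illustration (a minimal, untested sketch, not code I have run; the I2C address 0x36 and the RAW ANGLE register 0x0C come from the AS5600 datasheet), a polling loop that keeps the PID supplied with fresh data might look like this:

#include <Wire.h>

const uint8_t AS5600_ADDR = 0x36;
float setpointDeg = 0.0;                  // desired position from the 'commander'

float readAngleDegrees() {
  Wire.beginTransmission(AS5600_ADDR);
  Wire.write(0x0C);                       // RAW ANGLE register (high byte first)
  Wire.endTransmission(false);            // repeated start, keep the bus
  Wire.requestFrom(AS5600_ADDR, (uint8_t)2);
  uint16_t raw = ((uint16_t)Wire.read() << 8) | Wire.read();
  return (raw & 0x0FFF) * 360.0 / 4096.0; // 12-bit count -> degrees
}

void setup() {
  Wire.begin();
}

void loop() {
  static unsigned long lastRun = 0;
  if (millis() - lastRun >= 5) {          // fixed ~200 Hz control interval
    lastRun += 5;
    float error = setpointDeg - readAngleDegrees();
    float drive = 2.0 * error;            // proportional term only, for brevity
    // ...send 'drive' to the motor driver (PWM + direction) here...
  }
}

The point being: no interrupt is needed. As long as the loop polls the sensor much faster than the flap can physically move, the PID always sees an essentially current angle.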

---

[The only 'exception' to such delays that comes to mind is the 'traditional' PID implementation using analogue electronics, with the P, I and D components implemented with operational amplifiers, etc. Even in this case, time is still a design concern, but it presents in the form of limited amplifier bandwidth, and so on.]

---------------

The 'desired position' is the corresponding data value from the 'commander'. In a manually driven system, this might be the position of a manually driven potentiometer, whilst in an automated machine system, it will be provided by the controlling computer. E.g. in something like a milling machine, the computer will be presenting a continually updated set of positions for the cutter to move to.

----------------

Sorry, I am not sure if this answers your question, and apologies if I have only discussed points you already understood. If this is the case, then please update your question, and hopefully someone will provide a better answer.

Best wishes, Dave


   
(@poppa)
Member
Joined: 3 years ago
Posts: 4
Topic starter  

Hi Dave,

 

thank you for your very well-written explanation. The example of driving a car with a 10-second-old video image describes it quite clearly.

Even though the AS5600 has an internal 12-bit ADC, some calculations are involved to get to the actual position angle, and that will certainly take some time. The communication with the sensor is over I2C, which takes time as well. I am not sure whether this will lead to oscillation around the target point.
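
For rough numbers (just a back-of-envelope estimate on my part): 12 bits give 4096 counts per turn, i.e. 360°/4096 ≈ 0.088° per count. Reading the two angle bytes over I2C costs about five byte frames of nine clocks each, roughly 45 clocks, which is about 0.45 ms at the default 100 kHz bus clock and about 0.11 ms at 400 kHz. For a slow mechanical flap that should be short compared with any sensible control period, so perhaps the tuning matters more than the sensor delay.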

I read the actual position in a subroutine and calculate the angle right before passing the numbers to the PID routine. I want the system to be quick and responsive, not just crawl to the desired position, and it certainly shall not oscillate around the target. Maybe this can be achieved by tweaking the PID values. But you are right: the system can only be as good as the given boundaries allow.

Thank you again, Poppa


   
(@davee)
Member
Joined: 4 years ago
Posts: 1959
 

Hi @poppa,

  Thank you for your kind reply. I don't know how long the A/D conversion, calculations, etc. all take, but you might wish to do a few experiments to find out.

The slightly tricky bit can be determining how to monitor the progress of parts of a programme without inserting extra delays that are longer than the original code takes to execute. A prime example is writing to the 'Serial monitor', which involves running a considerable amount of library code for formatting, etc., and the serial line handling is usually rate-limiting.

If you have an oscilloscope, an old trick is to drive a spare GPIO output high immediately before the start of an operation, such as the A/D in your case, and low at the finish, say just after the result of that A/D operation has been 'posted' to the PID routine. With luck, each set-high/set-low operation will only involve a few processor instructions and add a very short delay of maybe a microsecond each.

(You may also do some 'calibration tests', like a repeated list of set-high, set-low instructions, to check the timing overheads.)
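
A sketch of the trick (the pin number 7 is arbitrary, and readAngleDegrees() stands for whatever sensor-read routine you are timing):

const int TIMING_PIN = 7;

void setup() {
  pinMode(TIMING_PIN, OUTPUT);
}

void loop() {
  digitalWrite(TIMING_PIN, HIGH);    // scope pulse starts
  float angle = readAngleDegrees();  // the operation being timed
  digitalWrite(TIMING_PIN, LOW);     // pulse ends: its width is the duration
  // ...then use 'angle' in the PID as normal...
  (void)angle;                       // keeps this stripped-down sketch warning-free

  // 'Calibration': back-to-back toggles show the overhead of the
  // digitalWrite() calls themselves.
  digitalWrite(TIMING_PIN, HIGH);
  digitalWrite(TIMING_PIN, LOW);
}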

It is necessary to be familiar with all the code to pick the best places to insert the set-high and set-low operations, so adjust this suggestion accordingly, and try different positions to get a fuller picture of the whole process.

I have already warned about the serial monitor. A possible, though maybe messy, workaround might be to 'grab' the system clock time at two or more points in the programme and record the times in an array. Hopefully, these steps will be fairly short and quick. Then, having captured enough clock times, print a batch of them out via the serial monitor (in 'slow' time).
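
Something like this, as a sketch (the array size and the placement of the checkpoints are up to you; Serial.begin() is assumed in setup()):

const int N_SAMPLES = 64;
unsigned long stamps[N_SAMPLES];
int nStamps = 0;

void mark() {                        // call at each point of interest
  if (nStamps < N_SAMPLES) stamps[nStamps++] = micros();
}

void dumpStamps() {                  // call later, when timing no longer matters
  for (int i = 0; i < nStamps; i++) Serial.println(stamps[i]);
  nStamps = 0;
}

Each call to mark() is only a micros() read plus an array write, so it adds far less delay than printing as you go; the differences between successive stamps give the section times.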

Of course, merely knowing how long the different elements of a programme take does not directly solve any limitations or problems. But in general, even very experienced professional programmers completely misjudge how much time the various sections of a programme take to execute, with the result that they spend a lot of time and effort 'fine-tuning' a piece of code to reduce its execution time, with little beneficial effect on the overall programme, because they have overlooked the main culprit(s).

These are just suggestions that you may already be aware of, but I hope they are of use to you.

If you attempt any of the above, consider publishing a short note on the forum, describing what you did, and what you discovered, as others might find it helpful.

Good luck and best wishes, Dave


   
Poppa reacted