
Fanuc CRX to Python integration for vision picking - help needed

Last updated 2 days ago
Posted by robotics_dev_tyler · 2 days ago

I'm working on a bin picking application with a CRX-10iA/L and need to get vision data from a Python script into the robot controller. We're using an external camera with OpenCV doing the object detection and pose estimation, but I'm stuck on the actual communication part. I know Fanuc has the PCDK (PC Developer Kit), but the documentation is pretty sparse and most examples are in C++. Has anyone gotten Python talking to a CRX reliably? I looked into socket communication but I'm not sure if that's the right approach or if I should be using something else. The goal is to send XYZ coordinates and orientation from Python to the robot, have it pick the part, then request the next set of coordinates. Right now I'm just manually teaching points, which obviously doesn't scale.
Reply by integration_specialist_mike · 2 days ago
Tyler, I just finished a similar project last month with a CRX-10iA. Socket communication is definitely doable and probably your easiest option. On the robot side you use the KAREL language to create a server that listens for incoming data - note that KAREL reads and writes the socket through a FILE opened on a server tag. Here's the basic structure I used:
PROGRAM socket_server
%NOLOCKGROUP
VAR
  comm_file : FILE
  n_bytes   : INTEGER
  status    : INTEGER
  msg       : STRING[254]
BEGIN
  -- 'S4:' is a server tag; set its port under SETUP > Host Comm > TCP/IP
  SET_FILE_ATR(comm_file, ATR_IA)
  WRITE('Waiting for connection', CR)
  MSG_CONNECT('S4:', status)
  OPEN FILE comm_file('RW', 'S4:')

  WHILE TRUE DO
    BYTES_AHEAD(comm_file, n_bytes, status)
    IF (status = 0) AND (n_bytes > 0) THEN
      READ comm_file(msg::n_bytes)
      -- Parse coordinates from msg here
      WRITE('Received: ', msg, CR)
    ENDIF
  ENDWHILE

  MSG_DISCO('S4:', status)
END socket_server


On the Python side, just use the standard socket library. The tricky part is the coordinate transformation from the camera frame to the robot base frame - you need a good hand-eye calibration. Are you doing an eye-in-hand or eye-to-hand setup?
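
Once your hand-eye calibration gives you the camera-to-base transform, applying it per detection is just a 4x4 matrix multiply. Rough sketch (the matrix here is an identity placeholder - use whatever your calibration actually produced):

import numpy as np

# 4x4 homogeneous transform from camera frame to robot base frame,
# as produced by your hand-eye calibration (identity as a placeholder)
T_base_cam = np.eye(4)

def camera_to_base(point_cam):
    # Append 1 to make the point homogeneous, multiply, drop the 1
    p = np.append(np.asarray(point_cam, dtype=float), 1.0)
    return (T_base_cam @ p)[:3]

# Example: object detected at (120.0, -50.0, 800.0) mm in camera frame
print(camera_to_base([120.0, -50.0, 800.0]))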
Reply by robotics_dev_tyler · 2 days ago

Thanks Mike, that KAREL snippet is helpful. We're doing eye-to-hand since the parts are in a bin that doesn't move. I've done the hand-eye calibration using OpenCV's calibrateHandEye function, and I think the transformation matrix is correct, but honestly I'm not 100% sure. When I transform the detected coordinates and manually jog the robot there, it's off by 10-15mm, which seems like a lot. Could be my calibration or could be camera distortion - hard to tell. For the socket communication, what port are you using? And do you have the Python client code you could share? I'm getting connection refused errors when I try to connect.
Reply by vision_engineer_sara · 2 days ago

That 10-15mm error is pretty typical if your calibration isn't dialed in, tbh. A few things to check: make sure you're accounting for camera distortion - use cv2.undistortPoints before doing any pose estimation (snippet after the client code below). Also, the checkerboard or calibration pattern needs to be REALLY flat; any warping will throw everything off. For the CRX specifically, make sure you're defining your tool frame correctly on the robot - the default TCP might not match your actual gripper. Here's a basic Python client I use:

import socket
import json

def send_pick_coords(x, y, z, rx, ry, rz):
    HOST = '192.168.1.10'  # Robot IP
    PORT = 2000

    try:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.connect((HOST, PORT))
            data = {'x': x, 'y': y, 'z': z, 'rx': rx, 'ry': ry, 'rz': rz}
            s.sendall(json.dumps(data).encode())
            response = s.recv(1024)
            return response.decode()
    except Exception as e:
        print(f"Connection error: {e}")
        return None
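
And for the undistortion step, it's roughly this (the intrinsics and distortion coefficients here are placeholders - use your own cv2.calibrateCamera results):

import cv2
import numpy as np

# Intrinsics from your camera calibration (placeholder values)
camera_matrix = np.array([[1400.0, 0.0, 960.0],
                          [0.0, 1400.0, 540.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.array([-0.12, 0.05, 0.0, 0.0, 0.0])

# Detected pixel coordinates of an object centroid, shape (N, 1, 2)
pixels = np.array([[[1024.5, 612.3]]], dtype=np.float64)

# Undistort, then re-project into pixel coordinates (P=camera_matrix)
# so your pose estimation sees distortion-free points
undistorted = cv2.undistortPoints(pixels, camera_matrix, dist_coeffs,
                                  P=camera_matrix)
print(undistorted)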
Reply by integration_specialist_mike · 2 days ago

Sara's code looks good. One thing I'll add: the CRX controllers can be finicky about message format. I ended up using a simple delimiter approach instead of JSON because parsing JSON in KAREL is a pain. I just send comma-separated values like "450.2,320.5,150.0,0,90,0" and parse them on the robot side (client-side sketch below). Also, Tyler, for debugging the coordinate transformation, try this: pick a single known point in your workspace, measure its actual coordinates in the robot base frame, then see what your vision system reports for that same point after transformation. If they don't match within 2-3mm, you need to redo the calibration. Use at least 15-20 different robot poses during calibration - more is better.
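
On the Python side, that just means swapping the JSON payload in Sara's client for a formatted string, something like:

import socket

def send_pick_coords_csv(x, y, z, rx, ry, rz,
                         host='192.168.1.10', port=2000):
    # Comma-separated values with a newline terminator - trivial to
    # split on the KAREL side, no JSON parsing needed
    payload = f"{x:.1f},{y:.1f},{z:.1f},{rx:.1f},{ry:.1f},{rz:.1f}\n"
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.connect((host, port))
        s.sendall(payload.encode('ascii'))
        return s.recv(1024).decode('ascii')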
Reply by robotics_dev_tyler · 2 days ago

OK, so I got the socket connection working - turned out I had the wrong IP address configured in the robot network settings, rookie mistake. Sara, I'm going to try your undistort suggestion because I'm definitely not doing that right now. Mike, the delimiter approach makes sense, I'll switch to that. New problem though: when I send coordinates and the robot tries to move, I sometimes get a MOTN-023 position error. I think it's because some of the calculated poses are unreachable due to joint limits. How do you guys handle that? Should I be doing reachability checking in Python before sending coordinates, or is there a way to catch that error on the robot side and request new coordinates?
Reply by fanuc_programmer_alex · 2 days ago

MOTN-023 usually means you're either outside the workspace or asking for a pose that requires a joint to rotate beyond its limits. You should definitely do reachability checking before sending. If you have the FANUC Roboguide software, you can export the robot's kinematic model and use something like ikpy or PyBullet in Python to verify solutions before sending them. Otherwise you can implement a basic workspace sphere check - the CRX-10iA/L has a 1418mm reach, so just make sure your target point isn't beyond that distance from the base (rough sketch below). Also check your approach angle: the CRX has limited wrist rotation, and if you're trying to pick at weird angles it'll fail. We typically generate multiple approach angle candidates in our vision pipeline and try each one until one works.
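
A minimal version of the sphere check (the margin is arbitrary - tune it for your cell, and remember this ignores joint limits and wrist orientation entirely):

import numpy as np

CRX_REACH_MM = 1418.0  # rated reach of the CRX-10iA/L

def is_roughly_reachable(target_xyz, base_xyz=(0.0, 0.0, 0.0), margin=50.0):
    # First-pass filter only: rejects points beyond the rated reach,
    # but says nothing about joint limits or approach angle
    dist = np.linalg.norm(np.asarray(target_xyz) - np.asarray(base_xyz))
    return dist <= CRX_REACH_MM - margin

print(is_roughly_reachable([1600.0, 0.0, 300.0]))  # False, beyond reach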
Reply by vision_engineer_sara · 2 days ago
Alex is right about checking reachability first. Another thing that helped us was adding a "safety zone" check around the bin edges: if your vision system detects an object too close to the edge, skip it and move on to the next one. Trying to pick parts at extreme angles is asking for trouble. Also make sure you're sending approach points, not just the final pick point - e.g. offset the target 100mm back along the approach direction, move there first, then move down to pick. That reduces the chance of collision with the bin walls (sketch below). One last thing: what gripper are you using? The default Fanuc grippers are pretty bulky for bin picking; we ended up going with a Robotiq 2F-85, which has better clearance.
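
Rough version of what we do, assuming a vertical pick into a rectangular bin aligned with the base frame (all offsets in mm - the margin values are just what worked for us):

def plan_pick(target_xyz, bin_min_xy, bin_max_xy,
              edge_margin=30.0, approach_offset=100.0):
    # Returns (approach_point, pick_point), or None if the part sits
    # too close to a bin wall to pick safely
    x, y, z = target_xyz
    too_close = (x < bin_min_xy[0] + edge_margin or
                 x > bin_max_xy[0] - edge_margin or
                 y < bin_min_xy[1] + edge_margin or
                 y > bin_max_xy[1] - edge_margin)
    if too_close:
        return None
    # Hover above the part first, then descend straight down
    return (x, y, z + approach_offset), (x, y, z)

# Example: a part near the +Y wall of the bin gets skipped
print(plan_pick((650.0, 185.0, 50.0), (300.0, -200.0), (700.0, 200.0)))  # None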