ROSduct: Exposing Remote ROS Topics, Services and Parameters Locally through rosbridge


Package: https://github.com/uts-magic-lab/rosduct

History:

In our lab, we kept hitting the problem of setting up machines (new students, visiting students, hackathons...) to communicate with our ROS robots, and the network was always an issue: installing ROS, setting ROS_MASTER_URI and ROS_IP, being on the same local network or setting up a VPN, and many other things, including machines running Windows, Mac, or unusual Linux versions...

The simplest solution was to put the ROS nodes inside Docker containers so they could be run from any machine. The problem was that you cannot expose all ports and use the same IP as the host machine (we were wrong: use the flag --net host and you'll be doing exactly that... but it doesn't work on Mac!). So we went the rosbridge way (as all our robots run rosbridge).

Description:

ROSduct, the duct of ROS messages. ROSduct acts as a proxy that exposes ROS topics, services and parameters from a remote roscore into a local roscore over the rosbridge protocol.

Say you have a ROS-enabled robot on your network and you want to communicate with it, but your network configuration does not allow direct communication (for example, from inside a Docker container). With ROSduct you can configure a set of topics, services and parameters (action servers too, as they are implemented internally as topics) to be exposed in the local roscore, transparently sending and receiving ROS traffic to and from the robot.

As a bonus, the package internally contains a rosbridge client implementation in Python. We are planning to give it more attention and publish it separately on PyPI. This effectively allows a computer without a ROS installation to communicate via ROS messages from Python (without having those message definitions installed).
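To make that concrete, here is a minimal sketch of speaking the rosbridge protocol from plain Python. This is not rosduct's internal client; it assumes the third-party websocket-client package and a rosbridge server at 192.168.1.31:9090:

import json
from websocket import create_connection  # pip install websocket-client

# Connect to the robot's rosbridge websocket server.
ws = create_connection("ws://192.168.1.31:9090")

# Advertise a topic and subscribe to it over the same connection.
ws.send(json.dumps({"op": "advertise", "topic": "/chatter",
                    "type": "std_msgs/String"}))
ws.send(json.dumps({"op": "subscribe", "topic": "/chatter"}))

# Publish one std_msgs/String message as JSON...
ws.send(json.dumps({"op": "publish", "topic": "/chatter",
                    "msg": {"data": "hello from a ROS-less machine"}}))

# ...and block until it is echoed back through the roscore.
print(json.loads(ws.recv()))
ws.close()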

You can run multiple rosducts, each handling a separate set of topics/services. This is recommended if any of them deals with topics that publish fast or carry a lot of data. You can also remap topics from different robots into a single local roscore and deal with them as if they shared the same roscore.

Note that going through rosbridge implies the overhead of converting everything to and from JSON. In some casual tests I got small topics up to 500 Hz, joint_states up to 100 Hz, and TF up to 200 Hz... with a good network, of course. You also get the latency that rosbridge introduces. Think of rosduct as a convenience tool.

Let me know if you find it interesting/useful!

      ----

      rosduct

ROSduct, the duct of ROS messages. ROSduct acts as a proxy to expose ROS topics, services and parameters from a remote roscore into a local roscore via the rosbridge protocol.

Say you have a ROS-enabled robot on your network and you want to communicate with it, but your network configuration does not allow direct communication (for example, from inside a Docker container). With ROSduct you can configure a set of topics, services and parameters (action servers too, as they are implemented internally as topics) to be exposed in the local roscore, transparently sending and receiving ROS traffic to and from the robot.


TODO: image explaining it.

Usage

Fill in a YAML file with your topic publishers and subscribers, the service servers to access, the service servers to expose, and the parameters to sync, plus the IP and port of the rosbridge websocket server.

# ROSbridge websocket server info
rosbridge_ip: 192.168.1.31
rosbridge_port: 9090
# Topics being published in the robot to expose locally
remote_topics: [
    ['/joint_states', 'sensor_msgs/JointState'],
    ['/tf', 'tf2_msgs/TFMessage'],
    ['/scan', 'sensor_msgs/LaserScan']
]
# Topics being published in the local roscore to expose remotely
local_topics: [
    ['/test1', 'std_msgs/String'],
    ['/closest_point', 'sensor_msgs/LaserScan']
]
# Services running in the robot to expose locally
remote_services: [
    ['/rosout/get_loggers', 'roscpp/GetLoggers']
]
# Services running locally to expose to the robot
local_services: [
    ['/add_two_ints', 'beginner_tutorials/AddTwoInts']
]
# Parameters to be synced; they will be polled to stay in sync
parameters: ['/robot_description']
parameter_polling_hz: 1

Note: do not add the /rosout topic to either the remote or local topics.

Example usage with Docker

This tool was mainly developed to work around a problem with Docker containers. If you are running a Docker container that needs bidirectional communication with a ROS robot and you are on Linux, you can add --net host to your docker run command (just after run). However, if you are on a Mac, this does not work. To work around that, you can use this package.
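For reference, the Linux-only direct approach looks like this (the container then shares the host's network stack, so no port mapping is involved):

docker run --net host -it your_docker_image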

Simply add the following lines to your Docker image:

mkdir -p ~/rosduct_ws/src
cd ~/rosduct_ws/src
git clone https://github.com/uts-magic-lab/rosduct
cd ..
catkin_make
. devel/setup.bash

Then make a launch file configuring rosduct to expose the topics/services you need. For example, a tool that interacts with move_base might have a launch file loading a configuration like the following:

# ROSbridge websocket server info
rosbridge_ip: YOUR.ROBOT.IP.HERE
rosbridge_port: 9090
# Topics being published in the robot to expose locally
remote_topics: [
    ['/amcl_pose', 'geometry_msgs/PoseWithCovarianceStamped'],
    ['/move_base/feedback', 'move_base_msgs/MoveBaseActionFeedback'],
    ['/move_base/status', 'actionlib_msgs/GoalStatusArray'],
    ['/move_base/result', 'move_base_msgs/MoveBaseActionResult']
]
# Topics being published in the local roscore to expose remotely
local_topics: [
    ['/move_base/goal', 'move_base_msgs/MoveBaseActionGoal'],
    ['/move_base/cancel', 'actionlib_msgs/GoalID']
]
# Services running in the robot to expose locally
remote_services: [
    ['/move_base/clear_costmaps', 'std_srvs/Empty']
]
# Services running locally to expose to the robot
local_services: []
# Parameters to be synced; they will be polled to stay in sync
#parameters: []
#parameter_polling_hz: 1
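A minimal launch file sketch that loads this configuration into the node's private namespace; the node script name rosduct_main.py and the config path are assumptions, so check the repository for the actual names:

<launch>
  <node pkg="rosduct" name="rosduct" type="rosduct_main.py" output="screen">
    <!-- Load the rosduct YAML configuration (assumed saved in the package's
         config folder) into this node's private namespace. -->
    <rosparam command="load" file="$(find rosduct)/config/move_base_config.yaml" />
  </node>
</launch>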

So you run your Docker image exposing port 9090 (for the rosbridge communication), docker run -p 9090:9090 -it your_docker_image, and run the previous launch file before running your ROS nodes.

To build the configuration, you can run rosnode info YOUR_NODE and check its Publications (local_topics), Subscriptions (remote_topics) and Services (local_services). To fill in remote_services, you need to know which services your node calls.

      ----

      Fiducial Marker Based Localization System

      Overview

      This package provides a system that allows a robot to determine its position and orientation by looking at a number of fiducial markers (similar to QR codes) that are fixed in the environment of the robot. Initially, the position of one marker is specified, or automatically determined. After that, a map (in the form of a file of 6DOF poses) is created by observing pairs of fiducial markers and determining the translation and rotation between them.

      How it works

The Ubiquity Robotics localization system uses a number of fiducial markers of known size to determine the robot's position. Detection of the markers is done by the aruco_detect node. For each marker visible in the image, a set of vertices in image coordinates is produced. Since the intrinsic parameters of the camera and the size of the fiducial are known, the pose of the fiducial relative to the camera can be estimated. Note that if the camera intrinsics are not known, they can be determined using the process described in the camera calibration tutorial.
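As a rough illustration of that step (not aruco_detect itself), here is a sketch using OpenCV's classic ArUco contrib API (OpenCV up to about 4.6); the dictionary, marker size, and calibration values are placeholder assumptions:

import cv2
import numpy as np

# Placeholder intrinsics; in practice these come from camera calibration.
camera_matrix = np.array([[600.0, 0.0, 320.0],
                          [0.0, 600.0, 240.0],
                          [0.0,   0.0,   1.0]])
dist_coeffs = np.zeros(5)
marker_length = 0.14  # marker side length in meters (assumed)

image = cv2.imread("frame.png")
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_5X5_1000)
corners, ids, _ = cv2.aruco.detectMarkers(image, dictionary)

if ids is not None:
    # One rotation/translation vector per marker: the pose of the
    # fiducial in the camera's coordinate system.
    rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
        corners, marker_length, camera_matrix, dist_coeffs)
    for marker_id, rvec, tvec in zip(ids.flatten(), rvecs, tvecs):
        print(marker_id, rvec.ravel(), tvec.ravel())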

The diagram below shows the coordinate system of a fiducial marker, which has a length of 2d. The image coordinates (x, y) of each vertex correspond to a ray from the camera. The pose estimation code solves a set of linear equations to determine the world (X, Y, Z) coordinate of each of the vertices. From this, we obtain the transform T_cam_fid between the fiducial's coordinate system and the camera's coordinate system; this represents the pose of the marker in the camera's coordinate system, and its inverse is T_fid_cam. Since we know the camera's pose in the coordinate system of the robot, we can determine the marker's pose relative to the robot; two or more markers may also be observed in the same image. In the diagram below, two fiducials, fid1 and fid2, are shown. If fid1 is at a known pose in the world, T_map_fid1, and we know the marker-to-camera transforms for both markers, we can compute the pose of fid2 thus:

T_map_fid2 = T_map_fid1 * T_fid1_cam * T_cam_fid2
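Reading T_a_b as the 4x4 homogeneous transform that maps coordinates in frame b into frame a, the update is plain matrix composition. A minimal numpy sketch with placeholder transforms:

import numpy as np

def transform(rotation, translation):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

# T_map_fid1: known pose of fid1 in the map (placeholder: 1 m along x).
T_map_fid1 = transform(np.eye(3), [1.0, 0.0, 0.0])
# T_cam_fid1, T_cam_fid2: marker poses in the camera frame, as estimated
# from the image (placeholders).
T_cam_fid1 = transform(np.eye(3), [0.0, 0.5, 2.0])
T_cam_fid2 = transform(np.eye(3), [0.3, 0.5, 2.5])

# Invert T_cam_fid1 to get T_fid1_cam, then chain map <- fid1 <- cam <- fid2.
T_fid1_cam = np.linalg.inv(T_cam_fid1)
T_map_fid2 = T_map_fid1 @ T_fid1_cam @ T_cam_fid2
print(T_map_fid2[:3, 3])  # position of fid2 in the map frame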

In this way, the map is built up as more fiducial pairs are observed. Multiple observations are combined with weighting to produce an estimate of the 6DOF pose of each fiducial marker.

      Getting Started

A camera is required, and it is necessary to know the position of the camera relative to the robot's base_link. Software for the Raspberry Pi is available at:

      https://github.com/UbiquityRobotics/raspicam_node

      To install the fiducial software from binary packages:

      sudo apt-get install ros-kinetic-fiducials

Fiducial markers can be generated with a command like this (here, markers numbered 100 through 112, written to fiducials.pdf):

      rosrun aruco_detect create_markers.py 100 112 fiducials.pdf

Once printed, they can be affixed to the environment. They don't need to be placed in any particular pattern, but the density should be such that two or more markers can be seen by the camera on the robot, so that the map can be built. Placing them on the ceiling reduces problems with occlusion, but is not required, since a full 6DOF pose is estimated.

Two nodes should be run: aruco_detect, which handles the detection of the fiducials, and fiducial_slam, which combines the fiducial pose estimates, builds the map, and estimates the robot's position. The map is in the form of a text file specifying the 6DOF pose of each of the markers, and is automatically saved.

      There are launch files for both of these nodes:

roslaunch aruco_detect aruco_detect.launch
roslaunch fiducial_slam fiducial_slam.launch

A launch file is also provided to visualize the map in rviz.

      roslaunch fiducial_slam fiducial_rviz.launch

This will produce a display as shown below. The bottom left pane shows the current camera view. This is useful for determining if the fiducial density is sufficient. The right-hand pane shows the map of fiducials as it is being built. Red cubes represent fiducials that are currently in view of the camera. Green cubes represent fiducials that are in the map, but not currently in the view of the camera. The blue lines show connected pairs of fiducials that have been observed in the camera view at the same time. The robustness of the map is increased by having a high degree of connectivity between the fiducials.

      Automatic Map Initialization

If the map is empty, then it will auto-initialize when a fiducial is visible. The auto-initialization calculates the pose of the nearest fiducial in the map frame, such that base_link of the robot is at the origin of map. For best results, this should be done with the robot stationary.

      Clearing the Map

      The map can be cleared with the following command:

      rosservice call /fiducial_slam/clear_map

      After this command is issued, the map can be auto-initialized, as described above.

We are excited to announce our fiducial based localization system, fiducials.

We love current LIDAR based localization methods, however they require expensive LIDAR for good results. LIDAR methods are also subject to the “kidnapped robot problem”, which is the inability to unambiguously localize ab initio in spaces which have a similar layout (e.g. if you move your robot to one of many similar offices it will get lost). Common LIDAR localization packages like amcl need to be initialized with a pose estimate on every run, something that can be difficult to do accurately. LIDAR based methods can also be difficult to tune and set up.

Our fiducial localization system enables a robot with a camera to perform robust, unambiguous localization based on pre-placed fiducial markers. The node simultaneously maps and localizes with these markers, and is robust against movements of single fiducials. This robustness comes from the fact that it continuously recomputes both the map of fiducials and the error associated with each fiducial, and then computes the reliability of each fiducial based on its estimated error. The required sensor is inexpensive and the method is relatively simple to set up. We use the Raspberry Pi Camera V2 ($25), but any calibrated camera with a ROS driver will work.
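As a toy illustration of that weighting idea (a hypothetical simplification, not the package's actual estimator; positions only, ignoring orientation):

import numpy as np

# Fuse repeated position observations of one fiducial, weighting each by
# the inverse of its estimated variance, so low-error observations
# dominate the map estimate.
observations = np.array([[1.02, 0.48, 2.01],
                         [0.98, 0.52, 1.99],
                         [1.20, 0.40, 2.10]])   # meters
variances = np.array([0.01, 0.01, 0.25])        # per-observation error estimates

weights = 1.0 / variances
fused = (observations * weights[:, None]).sum(axis=0) / weights.sum()
# The fused variance shrinks as consistent observations accumulate.
fused_variance = 1.0 / weights.sum()
print(fused, fused_variance)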

      Here is a screenshot of rviz visualizing the fiducial map:

[Image: rviz visualization of the fiducial map]

This localization method may be used stand-alone, or it can be used as a complement to more traditional LIDAR methods to create unambiguous localization at all times, using a system like robot_localization.

      For creating and detecting fiducial markers we use OpenCV’s ArUco module.

More about operation and usage can be found on the wiki page.

Have an issue, or an idea for improvement? Open an issue or PR on the GitHub repo.

      This package will be part of the robots that we will release via crowdfunding on Indiegogo at 1 minute past midnight EST on March 10th 2018 (less than 2 weeks from now).

      The Ubiquity Robotics Team

      https://ubiquityrobotics.com
