
Visual Dataflow Language for Educational Robots Programming

G.A. Zimin <zimin.grigory@gmail.com> D.A. Mordvinov <mordvinov.dmitry@gmail.com> St Petersburg State University, Institute of Mathematics and Mechanics, 28 University Ave., 198504, Russia

Abstract. Visual domain-specific languages usually have a low entry barrier; sometimes even children can program in such languages by working with visual representations. This is widely used in the educational robotics domain, where the most commonly used programming environments are visual. The paper describes a novel dataflow visual programming environment for embedded robotic platforms. Complex dataflow languages are, of course, not easy to understand. The purpose of our tool is to serve as a "bridge" between lightweight educational robotic programming tools (which commonly provide languages based on the control flow model) and complex industrial tools (which provide languages based on the more complex dataflow execution model). We compare the programming environments most used by the robotics community with our tool. After a brief review of behavioural robotic architectures, some thoughts on expressing them in terms of our dataflow language are given. The visual language described here provides the opportunity to mix dataflow and control flow models for robotics programming, which we believe is important for educational purposes. A program in our language consists of blocks (visual representations of data transformation processes) and "links", which represent the data flows between them. The domain-specific modelling approach was used to develop the language. The paper also provides examples of solving two typical robot control tasks in our language.

Keywords: robotics, data flow, visual programming, educational robotics, domain-specific modelling, subsumption architecture.

DOI: 10.15514/ISPRAS-2016-28(2)-3

For citation: G.A. Zimin, D.A. Mordvinov. Visual Dataflow Language for Educational Robots Programming. Trudy ISP RAN/Proc. ISP RAS, vol. 28, issue 2, 2016, pp. 45-62. DOI: 10.15514/ISPRAS-2016-28(2)-3

1. Introduction

Programming languages for creating robotic controllers are an active research topic, often discussed at major conferences such as ICRA1 or IROS2. Visual programming languages (VPLs) have also been actively discussed for the last three decades, and large conferences such as VL/HCC3 are held annually. VPLs are often applied in the robotics domain [1-5], allowing one to create and visualize robotic controllers. Robotic VPLs are commonly used for educational purposes, making it possible even for junior school students to create robotic programs. For these aims there already exists a great number of educational robotic programming environments based on VPLs, e.g. NXT-G4, TRIK Studio5, ROBOLAB6; there are also academic tools implementing interesting and novel approaches to educational robotics programming [1], [3], [5]. Robotic control programs are inherently reactive: they transform data continuously coming from multiple sensors into impulses on actuators. For this reason dataflow languages (DFLs) are well suited for robotics programming. Many researchers have noted the convenience of dataflow visual programming languages (DFVPLs) [6], finding them more useful than textual DFLs, for example because data flows are explicitly displayed on the diagram. There are large and complex general-purpose and domain-specific development environments, such as LabVIEW7 and Simulink8, that provide a large (and sometimes even cumbersome) set of libraries for robotics programming. A more detailed discussion of robotics VPLs is provided in section 2.

There is a large number of robotic construction kits for learning the basics of robotics and cybernetics, such as LEGO MINDSTORMS9, TRIK, and ScratchDuino10. The programming languages used for these kits are based on the control flow model rather than on the dataflow model. Control flow-based languages are good for solving scholar "toy" tasks, but may be inconvenient for programming more complex "real world" controllers that are conveniently expressed in DFLs. A simple DFVPL may be considered a useful step from educational VPLs to the programming languages used in universities and industry. This paper discusses a novel extensible tool for programming all popular educational robotic kits in a dataflow visual programming language. It should be noted that, in distinction from other tools, our tool is focused on embedded systems (section 6).

1IEEE International Conference on Robotics and Automation. Available: http://www.icra2016.org/

2International Conference on Intelligent Robots and Systems. Available: http://www.iros2016.org/

3IEEE Symposium on Visual Languages and Human-Centric Computing. Available: https://sites.google.com/site/vl-hcc2016/

4NXT-G quick programming guide. Available: http://www.legoengineering.com/nxt-g-quick-guide/

5All about TRIK: TRIK Studio. Available: http://blog.trikset.com/p/trik-studio.html

6ROBOLAB quick guide. Available: http://www.legoengineering.com/robolab-quick-guide/

7LabVIEW System Design Software - National Instruments. Available: http://www.ni.com/labview/

8Simulink - Simulation and Model-Based Design. Available: http://www.mathworks.com/products/simulink/

9MINDSTORMS EV3 - Products. Available: http://www.lego.com/en-us/mindstorms/products/

10ScratchDuino — Magnetic Robot Construction Kit. Available: http://www.scratchduino.com/

Another interesting aspect of our work is the application of the DSM approach for the implementation of the visual editor: it is entirely generated by the QReal DSM platform [7], [8] without a single line of code written. We also take into consideration the popularity of Brooks' Subsumption Architecture [9], which is still a mainstream approach to the design of complex robotic controllers [1], [2], [4], [10] even though it was proposed 30 years ago. Brooks' Subsumption Architecture and some others are conveniently expressed in our language; they are discussed in section 3.

The remainder of the paper is organized as follows. An overview of robotics VPLs and DFVPLs is presented in section 2. Section 3 provides some general thoughts on how some widely used robotic behavioural architectures are expressed in our language. A detailed description of our language is given in section 4. Section 5 demonstrates two typical robotic controllers expressed in our language. The most important details of the implementation are discussed in section 6. Finally, the last section concludes the paper and discusses possible directions for future work.

2. Similar Tools

Robot programming environments can be divided into three categories: educational, which allow programming small educational robotic kits; industrial, which have a rich toolkit for creating large and complex robotic controllers; and academic, which implement new and interesting ideas but are often unavailable for download or unusable in practice.

Examples of educational visual environments are NXT-G and ROBOLAB for the LEGO MINDSTORMS NXT kit, EV3 Software for the LEGO MINDSTORMS EV3 kit, and TRIK Studio for NXT, EV3 and TRIK. Those environments simplify solving primitive robot control tasks, like finding a way out of a maze or driving along a line using light sensors, which makes the process of learning the basics of programming and robot control easy. But their simplicity often limits the flexibility of the language. The visual languages of all mentioned systems are based on the control flow model. There is also a number of well-known visual robotic programming environments of industrial level, for example the general-purpose LabVIEW from National Instruments with its DFVPL G, or the Simulink environment developed by MathWorks for modelling various dynamic and control systems. Those products offer a huge set of models and libraries to create control systems, test benches, and real-time systems of any complexity using a model-driven approach. LabVIEW provides an opportunity for programming small robots. There are lots of examples of applying LabVIEW in education [11], [12], but much more often adaptations like ROBOLAB are used in the educational process. It should be noted that those environments are distributed under commercial licenses.

Another example of an industrial visual robotics system is Microsoft Robotics Developer Studio (MSRDS) [13], which is free for academic purposes and allows creating distributed robotic systems in a DFVPL. MSRDS officially supports a large set of robotic platforms, LEGO NXT [14] in particular (however, the autonomous mode for NXT is not supported). MSRDS offers manual integration with custom robotic platforms, but unfortunately has not been maintained since 2014. A lot of scientific research has been done in this area: e.g., the dissertation [1] describes a visual programming module for expressing robotic controllers in terms of extended Moore machines, while [3], [4] describe a visual environment for the occam-π language and the Transterpreter framework and its usage in education and swarm robotics. Article [5] describes a DFVPL for beginners, which is quite close to the one we introduce here. However, at the moment Ruru is under development; it has rather limited functionality and is not even available for download.

3. Robotic Behavioural Architectures

Creating a complex and scalable robotic controller is indeed a non-trivial task. Starting from the mid-80s, many researchers have attempted to solve this problem, and a number of behavioural robotic architectures were proposed [18]. Those approaches quickly became popular in the robotics community and are still relevant. For example, the original work that introduced Brooks' Subsumption Architecture [9] is one of the most cited works in the entire robotics domain. We believe that the description of a modern language for programming robotic controllers should contain at least general thoughts on how those architectures may be expressed in it. A controller built on Brooks' Subsumption Architecture is decomposed into a hierarchy of levels of competence, where each new layer describes a new feature of the robot's behaviour. Levels are "ordered", with higher levels describing more "intelligent" behaviour of the robot. Higher levels depend on lower ones but not vice versa, so failures of higher levels do not imply failures of lower ones. This is an important feature for mobile robotics: e.g., if the robot's gripper is damaged, the controller is still able to deliver the robot to its base. Levels of responsibility are expressed as a set of "behaviours" running concurrently and interacting with each other via channels of suppression and inhibition. Using them, higher levels can suppress the activity of lower ones, thus correcting the behaviour of the whole system.

Brooks in his original work proposed expressing behaviours in terms of state machines. Each layer implements some simple logic of transforming sensor inputs into impulses on actuators. Dataflow languages are obviously as suitable as state machines for expressing such behaviours. In our language each behaviour can be represented as a "black box" described by a separate subprogram. Also, our language contains Suppressor and Inhibitor elements for communication between layers. Levels can be invoked concurrently, so we can conclude that our language allows the convenient expression of controllers built with the Subsumption Architecture. That is demonstrated by an example in section 5.
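To make the suppression mechanism concrete, the sketch below models the intuition behind a suppression channel in plain code: tokens arriving from the higher-priority layer replace tokens from the lower-priority layer for a fixed time window. This is only an illustration under our own assumptions (the class name, the integer token type and the time-window policy are ours); it is not the implementation of the Suppressor block itself.

```cpp
#include <chrono>
#include <functional>
#include <iostream>
#include <optional>

// Hypothetical sketch of suppression: tokens from the higher (suppressing)
// layer replace tokens from the lower (suppressed) layer for a fixed window.
class Suppressor {
public:
    Suppressor(std::chrono::milliseconds window, std::function<void(int)> output)
        : window_(window), output_(std::move(output)) {}

    // Token from the higher-priority layer: forward it and start suppressing.
    void onHighPriority(int token) {
        lastHigh_ = std::chrono::steady_clock::now();
        output_(token);
    }

    // Token from the lower-priority layer: forward it only if the
    // suppression window has expired.
    void onLowPriority(int token) {
        auto now = std::chrono::steady_clock::now();
        if (!lastHigh_ || now - *lastHigh_ > window_) {
            output_(token);
        }
    }

private:
    std::chrono::milliseconds window_;
    std::function<void(int)> output_;
    std::optional<std::chrono::steady_clock::time_point> lastHigh_;
};

int main() {
    Suppressor s(std::chrono::milliseconds(500),
                 [](int v) { std::cout << "motor power: " << v << "\n"; });
    s.onLowPriority(40);   // wandering layer drives the motors
    s.onHighPriority(-30); // collision-avoidance layer takes over
    s.onLowPriority(40);   // suppressed while the window is open
}
```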

Connell's Colony Architecture [15] is very similar to Brooks' one, but solves some scalability issues of the Subsumption Architecture. It also decomposes the controller into a number of communicating concurrent levels, but they are unordered. The other difference is the absence of the inhibition channel; data inhibition should be implicitly expressed by predicates in layers. Our language does not force any order between layers, and predicative inhibition can be implemented simply with the Filter block. So the Colony Architecture is also well expressed in our language.

There also exist Arkin's Motor Schema [16] and Rosenblatt's Distributed Architecture for Mobile Navigation (DAMN) [17], which are compatible with our language, but their detailed descriptions are omitted here. General ideas on their implementation in the occam-π language can be found in [18]; we believe that those ideas will suffice in the context of this paper. A complete study of expressing behavioural architectures in our language is a topic for a separate paper.

4. Language Description

The evolution of domain-specific modelling (DSM) tools allows one to quickly create fairly sophisticated visual programming languages [19]. The TRIK Studio programming environment is an example of a system created using the DSM-based approach on the QReal platform [7], [8]. Building on the industrial experience of the TRIK Studio developers, we decided to create the visual editor of our language on the QReal platform. A program in a DFVPL is a set of blocks and flows that connect the blocks. DFVPL blocks process incoming tokens and emit the resulting data into the output data flows. Blocks in our language can be divided into several groups, which are described below. Some blocks require specifying information in a textual language; the language we use is a statically typed dialect of Lua.

• Control blocks that implement basic algorithmic constructions (conditions, loops, etc.).

◦ ConstValue and RandomValue blocks are responsible for the generation of a predetermined value of any type or a random number.

◦ Loop, If, Switch. These blocks implement general control flow algorithmic constructions in dataflow style. Loop is an entity which emits a sequence of numbers a given number of times. If checks the condition specified in a textual language and sends incoming data to the True or False channel. Switch successively checks guard conditions and, if one evaluates to true, sends incoming data to the corresponding channel.

◦ Function block allows processing the input data in a textual language. Most often this block is used for mathematical processing of data.

◦ FinalBlock stops the execution of the program when receiving any data.

◦ Subprogram allows reusing code. Double-clicking a subprogram block opens a new visual editor tab with the implementation of this subprogram. The contents of that tab can then be edited by the user in exactly the same way as the main diagram.

◦ GetSetVariable - purely practical block for setting value of some global variable or emitting it into output flows.

◦ Wait block delays data processing.

◦ DelayAndFilter is an extension of the previous block, adding a filtering condition and a check on the amount of emitted data validated by the condition.

◦ Fork, EndFork blocks provide the ability to invoke code in platform-specific execution units. See section 6 for details.

• Drawing. Blocks for drawing on the display of the robot and on the floor in simulator mode.

◦ PaintSettings defines the current background color, the thickness and color of the pen, and the color and style of the brush used to draw graphical primitives.

◦ ShapePainter, SmilePainter, Text are used for drawing a shape, text or smile on the robot's display.

◦ Clear block removes all graphics from robot's display when receiving any token.

◦ Pen block puts down or raises the marker for drawing the robot's trace on the "floor" of 2D simulator.

• Flow manipulation. These elements provide an opportunity to manipulate the data which flows between blocks.

◦ InPort, OutPort forward tokens that come into an instance of the Subprogram block to the diagram implementing it, and similarly redirect data from the subprogram implementation into the output flows of the active instance of the Subprogram block.

◦ Suppressor, Inhibitor inhibit or replace tokens of some flow with tokens of another. These blocks, together with Subprogram and Fork, provide compatibility with Brooks' Subsumption Architecture.

◦ Zip, Unzip provide an opportunity to gather data from several flows into one and vice versa.

• Actions. These blocks provide the ability to query and modify the state of the robot's input and output devices.

◦ Sensor continuously emits data from specified sensor, e.g. infrared, light, etc.

◦ Servo, Motors process received data and send impulses to robot actuators.

◦ Encoders block sets the motors tacho limit when receiving data and continuously emits encoder values into output flows.

◦ SendMessage, ReceiveMessage are responsible for the coordination of a group of robots.

◦ Say, PlayTone, LED are responsible for managing the speakers and LED lights.

◦ RemoveFile, WriteToFile, ReadFile implement working with file system.

◦ InitCamera, DetectByVideo, StreamingNode wrap some algorithms of computer vision.

◦ PortBlock provides the ability to write low-level data to some port of the robot.

◦ SystemCall is responsible for command execution by the command-line interpreter; e.g., the token "reboot" will reboot the robot.

◦ Gamepad reads data from the operator's control device, e.g. gamepad, and emits it.

These blocks are enough to express a pretty wide range of robotic controllers of varying complexity. If several blocks emitting data from one input device are present, only one of them is active. That detail distinguishes our tool from others implementing the dataflow paradigm; for details see section 6. For example, Fig. 1 shows a diagram with Motors, ConstValue, Encoders and flows, where the Encoders block is present twice. When interpretation starts, ConstValue emits data to Motors, and Encoders (a) emits the value of the tacho counter. When block Encoders (b) receives some data and thus nullifies the encoder value, at that moment Encoders (a) stops emitting tokens.

Fig. 1. A block with multiple representations, only one of which can be active: a, b - Encoders, c - ConstValue, d - Motors.


One important detail about our language is that it explicitly supports the control flow model, which is important for educational goals. In Fig. 1, ConstValue and Motors have incoming and outgoing "arrows", which are used for control flow connections. For example, the Motors block emits data to its control flow channel when it handles incoming data, and ConstValue emits its value when it receives a control flow token.

Flows may be pinned to a block on its left, right and bottom sides, which are highlighted when the user edits the block (see Fig. 2). A block may also contain text fields; e.g., in Fig. 2 the user has entered a textual condition.

Fig. 2. Displaying and editing a block.

5. Example

Figures 3 and 4 show a simple PD regulator which keeps the robot at a certain distance from a wall using an infrared sensor.


Fig. 3. Controller for the wall following.

A global variable is used for storing old sensor values. Expressions in the Function block are calculated in top-to-bottom order, and the results of previous expressions are available on lower levels. Each level emits its resulting token into a corresponding flow; in our example two flows are connected directly to the motors control block.

Fig. 4. Simulation process of the wall following.
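For reference, a typical wall-following PD control law of the kind computed by such a Function block can be written as follows; the notation is ours and is not taken from the diagram. With $d_t$ the current infrared reading, $d^{*}$ the desired distance to the wall and $e_t = d^{*} - d_t$ the error,

\[ u_t = k_p\, e_t + k_d\,(e_t - e_{t-1}), \]

where $e_{t-1}$ is the previous error kept in the global variable; the two motor powers are then set to $v_0 + u_t$ and $v_0 - u_t$ for some base speed $v_0$.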

Let us describe a more complex robotic controller. We have a robot equipped with two power motors and two frontal infrared sensors positioned at an angle of 30 degrees on either side of the longitudinal line of symmetry of the robot. Consider a control system that makes the robot wander in space while avoiding frontal collisions, but at the same time allows manual control with a gamepad. We divide the problem into three levels of responsibility using the Subsumption Architecture. The first is responsible for the aimless movement of the robot. The second is responsible for collision avoidance: if the robot is too close to a collision, it must avoid the obstacle, overriding the wandering behaviour. The third is responsible for serving user queries: the user obtains full control and the previous levels are suppressed. Figure 5 shows this decomposition. Each level is represented as a Subprogram and emits pulses to actuators. Execution begins with the launch of all levels concurrently. The robot wanders aimlessly. If the robot is close to a collision, the collision avoidance level suppresses the flow with data emitted by the wandering level. If the user starts to manipulate the gamepad, the data it sends suppress the levels described above.

Fig. 5. Controller code with three competence levels. 1 - Human control level. 2 - Collision avoidance level. 3 - Wandering level. 4 - Suppressor block for levels 2 and 3. 5 - Suppressor block for level 1 and levels 2-3. 6 - Unzip block. 7 - Motors block.

Each level is a simple robot controller without a direct connection to actuators. Wandering (the first level) continuously generates a random number for each robot actuator and sends them outside as an array (see Fig. 6). The execution of this level starts with InPort, which emits data to activate two RandomValue blocks. Each RandomValue generates a random number and emits it to a Wait block, which after some predefined delay sends it to the Zip block, which produces an array storing the output values. The second level is needed to prevent collisions (see Fig. 7). It continuously gathers data by Zip from two infrared Sensors and checks whether a collision threatens (continuously, after some delay, by DelayAndFilter). If a collision can occur, the values sent to the actuators to evade the obstacle are calculated by Function. The Function block emits them to the Zip block, which produces an array storing the output values.

The third level is responsible for gamepad control (see Fig. 8). Gamepad emits tokens describing the current joystick and button states. For simplicity, we assume that pressing any button on the gamepad will terminate the robot control program (by FinalBlock). The tokens from the Gamepad are converted into the array of pulses for actuators by a Function block, which emits it through the OutPort block.

Fig. 6. Walking. 1 - InPort block. 2, 3 - RandomValue blocks. 4, 5 - Wait blocks. 6 - Zip block. 7 - OutPort block.

Fig. 7. Collision avoidance. 1, 2 - Sensor blocks. 3, 6 - Zip blocks. 4 - DelayAndFilter block. 5 - Function block. 7 - OutPort block.

Fig. 8. Human control (the third level). 1 - Gamepad block. 2 - FinalBlock. 3 - Function block. 4 - OutPort block.

6. Implementation

The system is implemented as two plugins for TRIK Studio. The first one describes the visual language and provides the visual editor for our system. It contains the metamodel of the dataflow visual language and is entirely generated by the QReal DSM platform. Plugged into TRIK Studio, this module provides a fully operational visual editor with all the advantages of the TRIK Studio control flow editor, like a modern-looking user interface, the ability to create elements with mouse gestures, different appearances of links, and so on. The time spent on the development of this plugin (not counting discussing and designing the prototype of the visual language on paper) roughly equals three man-days. The benefit of exploiting the DSM approach is obvious: developing a similar editor from scratch would have taken vastly more time. The second plugin contains the implementation of the dataflow diagram interpreter. The interpreter transforms a given program drawn in the editor (provided by the first plugin) into a sequence of commands sent to a target robot (see Fig. 9). The target robot can be one of those supported by the TRIK Studio infrastructure: a Lego NXT or EV3 robot, a TRIK robot, the TRIK Studio 2D simulator, or the V-REP 3D simulator [20]. Commands are sent via the high-level TRIK Studio devices API, a part of which is presented in Fig. 10.

Fig. 9. The general architecture of the system.

Fig. 10. Partial architecture of devices used in dataflow interpreter.

The general architecture of the interpreter plugin is presented in Fig. 11. The interpreter traverses the given dataflow diagram, validates it, and prepares it for the interpretation process.

For each visited dataflow block an implementation object is instantiated. Implementation objects are written in C++. Instantiation is performed by a corresponding factory object. Implementation objects are then subscribed to each other just as the blocks are connected by flows on the diagram; the publish-subscribe pattern is used here. The set of initial blocks is determined next; those are blocks without incoming flows. After all that is done, the preparation phase is complete and the diagram starts being interpreted.


Fig. 11. The general architecture of dataflow interpreter plugin.
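The sketch below illustrates the preparation phase described above: one implementation object is created per diagram block via a factory, objects are subscribed to each other according to the flows, and blocks without incoming flows are collected as the initial set. All class and function names here are illustrative, not the actual plugin code.

```cpp
#include <map>
#include <memory>
#include <string>
#include <vector>

struct Flow { int from; int to; };          // a dataflow link between two block ids

class BlockImpl {                            // base class for block implementations
public:
    virtual ~BlockImpl() = default;
    void subscribe(BlockImpl* downstream) { subscribers_.push_back(downstream); }
protected:
    std::vector<BlockImpl*> subscribers_;    // blocks fed by this block's output flows
};

class DummyBlock : public BlockImpl {};      // stands in for Sensor, Function, Motors, ...

// Hypothetical factory: maps the block kind written on the diagram to an object.
std::unique_ptr<BlockImpl> makeBlock(const std::string& kind) {
    (void)kind;                              // a real factory dispatches on `kind`
    return std::make_unique<DummyBlock>();
}

int main() {
    std::map<int, std::string> blocks = {{0, "Sensor"}, {1, "Function"}, {2, "Motors"}};
    std::vector<Flow> flows = {{0, 1}, {1, 2}};

    std::map<int, std::unique_ptr<BlockImpl>> impls;
    for (auto& [id, kind] : blocks) impls[id] = makeBlock(kind);

    std::map<int, bool> hasIncoming;
    for (const Flow& f : flows) {
        impls[f.from]->subscribe(impls[f.to].get());   // wire publish-subscribe links
        hasIncoming[f.to] = true;
    }

    std::vector<BlockImpl*> initial;                   // blocks with no incoming flows
    for (auto& [id, impl] : impls)
        if (!hasIncoming[id]) initial.push_back(impl.get());
    // `initial` now holds the blocks from which interpretation starts (here: Sensor).
}
```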

The interpretation process is not as straightforward as in most asynchronous dataflow environments. Usually components of a dataflow diagram are executed concurrently, on different threads, processes or even machines (this is actively exploited, for example, by Microsoft Robotics Developer Studio, where a dataflow diagram is deployed onto a number of web services). That is a pretty convenient way to execute dataflow diagrams on powerful hardware, but not when we talk about embedded devices. In our case we deal exactly with embedded devices (Lego NXT, EV3, TRIK, Arduino controllers), so we propose another way of executing dataflow diagrams. The main idea is to introduce a global message queue and an event loop for message processing. When a token is published by some block, it is enqueued into the message queue and waits for its turn to be delivered to subscribers (Fig. 12). In fact, we thus flatten the execution, converting the concurrent way of dataflow interpretation into a pseudo-concurrent one where we schedule the invocation order on our own. It must be noted that this mechanism is similar to the event propagation system of the Qt framework. That is actively exploited in our implementation, where message processing is completely performed by the QEventLoop class and token delivery is done by the Qt signal/slot system in QueuedConnection mode.

Fig. 12. Proposed mechanism of pseudo-concurrent dataflow interpretation.
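A minimal sketch of this flattened execution scheme is given below: publishing a token only enqueues a delivery into a global queue, and a single event loop delivers queued tokens one by one. In the real implementation this role is played by QEventLoop and queued signal/slot connections, as noted above; the sketch uses a plain std::queue to stay self-contained, and all class names are ours.

```cpp
#include <functional>
#include <iostream>
#include <queue>
#include <vector>

struct Token { double value; };

class EventLoop {
public:
    void post(std::function<void()> delivery) { queue_.push(std::move(delivery)); }
    void run() {
        while (!queue_.empty()) {            // process messages until the queue drains
            auto delivery = std::move(queue_.front());
            queue_.pop();
            delivery();
        }
    }
private:
    std::queue<std::function<void()>> queue_;
};

class Block {
public:
    Block(EventLoop& loop, std::function<Token(Token)> transform)
        : loop_(loop), transform_(std::move(transform)) {}
    void subscribe(Block* downstream) { subscribers_.push_back(downstream); }
    // Publishing does not call subscribers directly: the delivery is queued,
    // which flattens concurrent dataflow execution into a sequential one.
    void publish(Token token) {
        for (Block* s : subscribers_)
            loop_.post([s, token] { s->receive(token); });
    }
    void receive(Token token) { publish(transform_(token)); }
private:
    EventLoop& loop_;
    std::function<Token(Token)> transform_;
    std::vector<Block*> subscribers_;
};

int main() {
    EventLoop loop;
    Block sensor(loop, [](Token t) { return t; });                    // e.g. a Sensor block
    Block function(loop, [](Token t) { return Token{t.value * 2}; }); // a Function block
    Block motors(loop, [](Token t) {
        std::cout << "motor power: " << t.value << "\n";
        return t;
    });
    sensor.subscribe(&function);
    function.subscribe(&motors);

    sensor.publish(Token{30});  // a sensor reading enters the diagram
    loop.run();                 // tokens are delivered in queued order
}
```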

The flat execution of dataflow diagrams poses a number of small problems, one of which will be discussed here. Input device blocks (for example, blocks publishing tokens from ultrasonic sensors) constantly emit tokens to subscribers. Subscribers transmit tokens to the next ones (possibly in a modified state) and so on, so a chain of data processing appears. In our language that chain can activate the control flow ports of blocks, "reviving" them, so the control flow model is implicitly supported (this is important for educational reasons). If the same input device block is met later in this chain, execution would proceed in a counter-intuitive way. Such conflicts are ruled out with a simple heuristic: among all the blocks sharing one physical device only one can be active, and that is the last activated one. Thus, when the execution token comes into some device block, it immediately "deactivates" the conflicting ones. Other problems, like message balancing (in case some block "floods" the whole message queue), will not be discussed here. The last thing we should remark on is the presence of the Fork block in our language, which is usually not provided by dataflow languages. The flattened model seems to work well on embedded devices, but sometimes users still need concurrent execution (for example, for executing layers in the Subsumption Architecture). For that reason the Fork block was introduced; it forks the execution into a number of platform-specific execution units (for example, pthreads on UNIX or tasks on NXT OSEK). This block can be regarded as low-level control of the execution process. It should also be noted that this block has almost no sense in interpretation mode (because execution itself is performed on the desktop machine, with only primitive commands sent to the robot), but it will be very useful in future work when the autonomous mode is introduced.
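The conflict-resolution heuristic can be illustrated by a tiny arbiter that remembers, for each physical device, the id of the last activated block; a block keeps emitting only while it is still the active one. This is our own hypothetical rendering of the rule, not the tool's API.

```cpp
#include <iostream>
#include <string>
#include <unordered_map>

// Hypothetical sketch: among all blocks sharing one physical device, only the
// last activated one stays active, so a newly activated device block
// implicitly deactivates conflicting ones.
class DeviceArbiter {
public:
    // Registers a block as the only active one for the given device and
    // returns its activation id.
    int activate(const std::string& device) { return active_[device] = ++counter_; }

    // A block keeps emitting tokens only while its id is still the active one.
    bool isActive(const std::string& device, int activationId) const {
        auto it = active_.find(device);
        return it != active_.end() && it->second == activationId;
    }

private:
    std::unordered_map<std::string, int> active_;
    int counter_ = 0;
};

int main() {
    DeviceArbiter arbiter;
    int encodersA = arbiter.activate("encoderM1");  // Encoders (a) starts emitting
    int encodersB = arbiter.activate("encoderM1");  // Encoders (b) is activated later
    std::cout << arbiter.isActive("encoderM1", encodersA) << "\n";  // 0: (a) deactivated
    std::cout << arbiter.isActive("encoderM1", encodersB) << "\n";  // 1: (b) is active
}
```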

7. Conclusion and Discussion

In this work, we presented the prototype of a dataflow language for programming different robotic kits (LEGO MINDSTORMS NXT, LEGO MINDSTORMS EV3, TRIK). The system provides the ability to interpret diagrams on 2D and 3D simulators and on real robotic devices. Here, we also propose an approach for executing dataflow diagrams on embedded devices. The language implicitly supports the control flow model for educational purposes. It is also convenient for expressing typical robotic controller architectures, which is demonstrated by an example.

The implemented system can be regarded as a platform for future investigations. First of all, an autonomous mode of work will be implemented. That will be done through code generation into a number of textual languages already supported by TRIK Studio (NXT OSEK C for Lego, bytecode for EV3, JavaScript, F# [21] and Kotlin for TRIK). We are also interested in academic research. First of all, a formal semantics of our language should be defined in order to apply various formal methods of program analysis. Another branch of research is directed at the DSM side: here we want to consider the ability of dynamic language metamodel generation from specifications of available modules of robotics middleware (like ROS [22] or Player [23]).

References

[1]. Banyasad, O. (2000). A Visual Programming Environment for Autonomous Robots.

[2]. Simpson, J., Jacobsen, C. L., & Jadud, M. C. (2006). Mobile robot control. Communicating Process Architectures, 225.

[3]. Simpson, J., & Jacobsen, C. L. (2008, September). Visual Process-Oriented Programming for Robotics. In CPA (pp. 365-380).

[4]. Posso, J. C., Sampson, A. T., Simpson, J., & Timmis, J. (2011). Process-Oriented Subsumption Architectures in Swarm Robotic Systems. In CPA (pp. 303-316).

[5]. Diprose, J. P., MacDonald, B. A., & Hosking, J. G. (2011, September). Ruru: A spatial and interactive visual programming language for novice robot programming. In Visual Languages and Human-Centric Computing (VL/HCC), 2011 IEEE Symposium on (pp. 25-32). IEEE.

[6]. Johnston, W. M., Hanna, J. R., & Millar, R. J. (2004). Advances in dataflow programming languages. ACM Computing Surveys (CSUR), 36(1), 1-34.

[7]. A.S. Kuzenkova, A.O. Deripaska, K.S. Taran, A.V. Podkopaev, Y.V. Litvinov, T.A. Bryksin. [Tools for fast development of domain-specific solutions in metaCASE-platform Qreal] St. Petersburg State Polytechnical University Journal, p. 142, 2011 (in Russian).

[8]. Kuzenkova A., Deripaska A., Bryksin T., Litvinov Y., Polyakov V. QReal DSM platform - An Environment for Creation of Specific Visual IDEs. In ENASE (pp. 205-211), 2013.

[9]. Brooks, R. A. (1986). A robust layered control system for a mobile robot. Robotics and Automation, IEEE Journal of, 2(1), 14-23.

[10]. Proetzsch, Martin, Tobias Luksch, and Karsten Berns. "The behaviour-based control architecture iB2C for complex robotic systems." KI 2007: Advances in Artificial Intelligence. Springer Berlin Heidelberg, 2007. 494-497.

[11]. Erwin, B., Cyr, M., & Rogers, C. (2000). Lego engineer and robolab: Teaching engineering with labview from kindergarten to graduate school. International Journal of Engineering Education, 16(3), 181-192.

[12]. Gomez-de-Gabriel, J. M., Mandow, A., Fernandez-Lozano, J., & García-Cerezo, A. (2011). Using LEGO NXT mobile robots with LabVIEW for undergraduate courses on mechatronics. Education, IEEE Transactions on, 54(1), 41-47.

[13]. Kuzenkova, A., Deripaska, A., Bryksin, T., Litvinov, Y., & Polyakov, V. (2013). QReal DSM platform - An Environment for Creation of Specific Visual IDEs. In ENASE (pp. 205-211).

[14]. Kim, S. H., & Jeon, J. W. (2007, October). Programming LEGO Mindstorms NXT with visual programming. In Control, Automation and Systems, 2007. ICCAS'07. International Conference on (pp. 2468-2472). IEEE.

[15]. Connell, Jonathan H. A colony architecture for an artificial creature. No. AI-TR-1151. MASSACHUSETTS INST OF TECH CAMBRIDGE ARTIFICIAL INTELLIGENCE LAB, 1989.

[16]. Arkin, Ronald C. Motor schema based navigation for a mobile robot: An approach to programming by behavior. Robotics and Automation. Proceedings. 1987 IEEE International Conference on. Vol. 4. IEEE, 1987.

[17]. Rosenblatt, Julio K. DAMN: A distributed architecture for mobile navigation. Journal of Experimental & Theoretical Artificial Intelligence 9.2-3 (1997): 339-360.

[18]. Simpson, Jonathan, and Carl G. Ritson. Toward Process Architectures for Behavioural Robotics. CPA. 2009.

[19]. D.V. Koznov. [Fundamentals of Visual Modeling] Binom. Laboratorija znanij, Internet-universitet informacionnyh tehnologij. 2008 (in Russian).

[20]. Rohmer, Eric, Surya PN Singh, and Marc Freese. V-REP: A versatile and scalable robot simulation framework. Intelligent Robots and Systems (IROS), 2013 IEEE/RSJ International Conference on. IEEE, 2013.

[21]. Kirsanov, Alexander, Iakov Kirilenko, and Kirill Melentyev. Robotics reactive programming with F#/Mono. Proceedings of the 10th Central and Eastern European Software Engineering Conference in Russia. ACM, 2014.

[22]. Quigley, Morgan, et al. ROS: an open-source Robot Operating System. ICRA workshop on open source software. Vol. 3. No. 3.2. 2009.

[23]. Gerkey, Brian, Richard T. Vaughan, and Andrew Howard. The player/stage project: Tools for multi-robot and distributed sensor systems. Proceedings of the 11th international conference on advanced robotics. Vol. 1. 2003.

An Educational Visual Dataflow Language for Robot Programming

G.A. Zimin <zimin.grigory@gmail.com> D.A. Mordvinov <mordvinov.dmitry@gmail.com> St. Petersburg State University, Faculty of Mathematics and Mechanics, 28 Universitetsky prospekt, 198504, Russia

Abstract. Visual domain-specific languages often have a low entry barrier: even school and preschool students can program in such languages by operating on visual models. This fact has found wide application in educational robotics, where most of the development environments in use are based on visual languages. This work describes a new visual dataflow language for programming robots on widespread embedded robotic platforms. Clearly, complex visual dataflow languages are hard to understand. The goal of our work was to create a tool that serves as a transitional "step" between lightweight educational programming environments, which usually provide languages based on the control flow model, and complex industrial environments, which mostly provide languages based on the dataflow model. The paper compares widely used robot programming environments with the environment described in this work. It also gives a brief overview of popular behavioural architectures for building complex robot control systems, such as R. Brooks' Subsumption Architecture and J. Connell's Colony Architecture, and presents ideas for expressing them in the new programming language. The language was created using a domain-specific modelling approach. It allows combining two execution models: the user can program both in terms of data flows and in terms of control flow. We believe this is important for educational purposes. Programs in our language consist of a set of "blocks", visual representations of data transformation processes, and "links", which visualize the data flows between them. As a validation of the environment, robot control programs of varying complexity have been created.

Keywords: dataflow languages, data flows, visual programming, educational robotics, domain-specific modelling, behavioural architectures.

DOI: 10.15514/ISPRAS-2016-28(2)-3

For citation: Zimin G.A., Mordvinov D.A. Educational visual dataflow language for robot programming. Trudy ISP RAN/Proc. ISP RAS, vol. 28, issue 2, 2016, pp. 45-62 (in English). DOI: 10.15514/ISPRAS-2016-28(2)-3
