<div dir="ltr"><div dir="ltr"><div class="gmail_default" style="font-size:small"><div><b>When</b>: Friday, January 13th from <b style="background-color:rgb(255,255,0)">10:30 am - 12:30 pm CT</b><br><br><b>Where</b>: Talk will be given <b><font color="#0000ff">live, in-person</font></b> at<br> TTIC, 6045 S. Kenwood Avenue<br> 5th Floor, <b><u><font color="#000000">Room 530</font></u></b></div><div><br><b>Virtually</b>: attend virtually <b><font color="#0000ff"><a href="https://uchicagogroup.zoom.us/j/92823957638?pwd=aTAzYndmNjVWdS90L1ZacjlmZk1vZz09" target="_blank">here</a></font></b><br></div><div><br><b>Who</b>: <span class="gmail-il">Andrea</span> Daniele<font face="arial, sans-serif">, TTIC</font></div><div><br></div><div><div class="MsoNormal" align="center" style="margin:0in 0in 8pt;font-size:11pt;text-align:center;line-height:15.6933px;font-family:Calibri,sans-serif"><hr size="2" width="100%" align="center"></div><font face="arial, sans-serif"><b>Title: </b>Accessible Interfaces for the Development and Deployment of Robotic Platforms<br><br><b>Abstract: </b>Accessibility is one of the most important features in the design of robots and their interfaces. Accessible interfaces allow untrained users to easily and intuitively tap into the full potential of a robotic platform. This thesis proposes methods that improve the accessibility of robots for three different target audiences: consumers, researchers, and learners.<br><br>In order for humans and robots to work together effectively, they both must be able to communicate with each other to convey information, establish a shared understanding of their collaborative tasks, and to coordinate their efforts. Natural languages offer a flexible, bandwidth-efficient medium that humans can readily use to interact with their robotic companions.</font></div><div><font face="arial, sans-serif"><br>We work on the problem of enabling robots to understand natural language utterances in the context of learning to interact with their environment. In particular, we are interested in enabling robots to operate articulated objects (e.g., fridge, drawer) by leveraging kinematic models. We propose a multimodal learning framework that incorporates both vision and language acquired in situ, where we model linguistic information using a probabilistic graphical model that grounds natural language descriptions to their referent kinematic motion. Our architecture then fuses the two modalities to estimate the structure and parameters that define kinematic models of articulated objects.<br><br>We then turn our focus to the development of accessible and reproducible robotic platforms for scientific research. The majority of robotics research is accessible to all but a limited audience and usually takes place in idealized laboratory settings or unique uncontrolled environments.<br><br>Owing to limited reproducibility, the value of a specific result is either open to interpretation or conditioned on specific environments or setups. In order to address these limitations, we propose a new concept for reproducible robotics research that integrates development and benchmarking, so that reproducibility is obtained “by design” from the beginning of the research and development process. We first provide the overall conceptual objectives to achieve this goal and then a concrete instance that we have built: the DUCKIENet. 
We then turn our focus to the development of accessible and reproducible robotic platforms for scientific research. The majority of robotics research is inaccessible to all but a limited audience and usually takes place in idealized laboratory settings or unique, uncontrolled environments. Owing to limited reproducibility, the value of a specific result is either open to interpretation or conditioned on a specific environment or setup. To address these limitations, we propose a new concept for reproducible robotics research that integrates development and benchmarking, so that reproducibility is obtained "by design" from the beginning of the research and development process. We first present the overall conceptual objectives of this goal and then a concrete instance that we have built: the DUCKIENet. We validate the system by analyzing the repeatability of experiments conducted with this infrastructure, and show that variance is low across different robot hardware and labs.

We then propose SHARC (SHared Autonomy for Remote Collaboration), a framework that improves the accessibility of underwater robotic intervention operations. Conventional underwater robotic manipulation requires a team of scientists aboard a support vessel to instruct the pilots on the operations to perform. This limits the number of scientists who can work together on a single expedition, effectively hindering a robotic platform's accessibility and driving up the cost of operation. Shared autonomy, by contrast, leverages human capabilities in perception and semantic understanding of an unstructured environment while relying on well-tested robotic capabilities for precise low-level control. SHARC allows multiple remote scientists to efficiently plan and execute high-level sampling procedures with an underwater manipulator while deferring low-level control to the robot. A distributed architecture lets scientists coordinate, collaborate, and control the robot from shore, thousands of kilometers away from one another and from the robot.
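The division of labor is easiest to see in miniature. The sketch below is a hypothetical rendering of that shared-autonomy split, with invented message fields and controller hooks standing in for SHARC's actual interfaces; the network transport between shore and vehicle is omitted.

```python
from dataclasses import dataclass

@dataclass
class SamplingGoal:
    """High-level intent from an on-shore scientist (fields are invented)."""
    target_xyz: tuple[float, float, float]  # where to sample, in the vehicle frame
    tool: str                               # e.g. "suction" or "grabber"
    operator: str                           # who requested it, for coordination

class RobotSideController:
    """Runs on the vehicle; turns goals into well-tested low-level motions."""

    def execute(self, goal: SamplingGoal) -> str:
        # In a real system these would be closed-loop controller calls;
        # here they are placeholders for the autonomy the robot retains.
        self.move_arm_to(goal.target_xyz)
        self.actuate_tool(goal.tool)
        return f"sample collected at {goal.target_xyz} for {goal.operator}"

    def move_arm_to(self, xyz): ...
    def actuate_tool(self, tool): ...

# Scientists thousands of kilometers apart submit goals over the network;
# the robot serializes and executes them in order.
controller = RobotSideController()
queue = [
    SamplingGoal((1.2, 0.4, -0.3), "suction", "scientist-A"),
    SamplingGoal((0.8, -0.1, -0.2), "grabber", "scientist-B"),
]
for goal in queue:
    print(controller.execute(goal))
```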
Lastly, we turn our attention to the impact of accessible platforms on educational robotics. While online learning materials and Massive Open Online Courses (MOOCs) are effective tools for educating large audiences, they tend to be entirely virtual. Robotics has a hardware component that cannot, and must not, be ignored or replaced by simulators. This motivated our development of the first hardware-based MOOC in AI and robotics. The course, offered for free on the edX platform, allows learners to study autonomy hands-on by having real robots make their own decisions and accomplish broadly defined tasks. We design a new robotic platform from the ground up to support this learning experience. A completely browser-based environment, built on leading tools and technologies for code development, testing, validation, and deployment (e.g., ROS, Docker, VSCode), maximizes the accessibility of these educational resources.
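To give a flavor of such a setup, a browser IDE with a ROS toolchain can be served from a disposable container, as in the sketch below. The image name, port, mount path, and helper function are hypothetical placeholders, not the course's actual configuration.

```python
import subprocess

# Hypothetical image bundling ROS and a browser-based IDE (placeholder name).
IMAGE = "example/mooc-vscode-ros:latest"

def launch_exercise(workspace: str, port: int = 8080) -> None:
    """Start a disposable container exposing a browser IDE on `port`,
    with the learner's exercise code mounted read-write."""
    subprocess.run([
        "docker", "run", "--rm", "-it",
        "-p", f"{port}:8080",        # browser IDE served on this port
        "-v", f"{workspace}:/code",  # learner's exercise directory
        IMAGE,
    ], check=True)

# launch_exercise("/home/learner/exercise-01")
# then open http://localhost:8080 in the browser
```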
Thesis Advisor: Matthew Walter (mwalter@ttic.edu)


Mary C. Marre
Faculty Administrative Support
Toyota Technological Institute
6045 S. Kenwood Avenue, Rm 517
Chicago, IL 60637
773-834-1757
mmarre@ttic.edu