{"id":758,"date":"2011-04-19T04:13:58","date_gmt":"2011-04-19T08:13:58","guid":{"rendered":"http:\/\/www.briancbecker.com\/blog\/?p=758"},"modified":"2020-04-12T19:45:33","modified_gmt":"2020-04-12T23:45:33","slug":"thesis-proposal","status":"publish","type":"post","link":"http:\/\/www.briancbecker.com\/blog\/2011\/thesis-proposal\/","title":{"rendered":"Thesis Proposal"},"content":{"rendered":"<p>Four years of grinding work in graduate school: done with classes, put out some conference papers, published a journal paper, and people keep asking when I&#8217;m going to be done. It must be that time in the PhD program to propose a thesis. Next Monday I&#8217;m giving my oral proposal, and I just mailed the proposal document to my thesis committee members and the Robotics Institute at large. The details are:<\/p>\n<p><strong>Vision-Based Control of a Handheld Micromanipulator for Robot-Assisted Retinal Surgery<\/strong><\/p>\n<p><strong><em>Abstract &#8211; <\/em><\/strong>Surgeons increasingly need to perform complex operations on extremely small anatomy. Many promising new surgeries are effective but difficult or impossible to perform because humans lack the extraordinary control required at sub-millimeter scales. Using micromanipulators, surgeons gain better positioning accuracy and additional dexterity as the instrument smooths tremor and scales hand motions. While these aids are advantageous, they do not actively consider the goals or intentions of the operator and thus cannot provide context-specific behaviors, such as motion scaling around anatomical targets, prevention of unwanted contact with pre-defined tissue areas, and other helpful task-dependent actions.<\/p>\n<p>This thesis explores the fusion of visual information with micromanipulator control and builds a framework of task-specific behaviors that respond synergistically to the surgeon\u2019s intentions and motions throughout surgical procedures. 
By exploiting real-time observations of the microscope view, a priori knowledge of surgical procedures, and pre-operative data used by the surgeon while preparing for the surgery, we hypothesize that the micromanipulator can better understand the goals of a given procedure and deploy individualized aids, in addition to tremor suppression, to further help the surgeon. Specifically, we propose a vision-based control framework of modular virtual fixtures for handheld micromanipulator robots. Virtual fixtures include constraints such as \u201cmaintain tip position\u201d, \u201cavoid these areas\u201d, \u201cfollow a trajectory\u201d, and \u201ckeep an orientation\u201d, whose parameters are derived from visual information, either pre-operatively or in real time, and are enforced by the control system. Combining individual modules allows for complex task-specific behaviors that monitor the surgeon\u2019s actions relative to the anatomy and react appropriately to cooperatively accomplish the surgical procedure.<\/p>\n<p>Particular focus is given to vitreoretinal surgery as a testbed for vision-based control because several new and promising surgical techniques in the eye depend on fine manipulations of delicate retinal structures. Preliminary experiments with Micron, the micromanipulator developed in our lab, demonstrate that vision-based control can improve accuracy and increase usability for difficult retinal operations, such as laser photocoagulation and vessel cannulation. An initial framework for virtual fixtures has been developed and shown to significantly reduce error in synthetic tests when the structure of the surgeon\u2019s motions is known. Proposed work includes formalizing the virtual fixtures framework, incorporating elements from model predictive control, improving 3D vision imaging of retinal structures, and conducting experiments with an experienced retinal surgeon. 
Results from experiments with <em>ex vivo<\/em> and <em>in vivo<\/em> tissue for selected retinal surgical procedures will validate our approach.<\/p>\n<p>Thesis Committee Members:<br \/>\nCameron N. Riviere, Chair<br \/>\nGeorge A. Kantor<br \/>\nGeorge D. Stetten<br \/>\nGregory D. Hager, Johns Hopkins University<\/p>\n<p>A copy of the thesis proposal document is available at:<br \/>\n<a href=\"http:\/\/briancbecker.com\/thesis\/becker_proposal.pdf\" target=\"_blank\" rel=\"noopener noreferrer\">http:\/\/briancbecker.com\/thesis\/becker_proposal.pdf<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Four years of grinding work in graduate school: done with classes, put out some conference papers, published a journal paper,&hellip;<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"class_list":["post-758","post","type-post","status-publish","format-standard","hentry","category-uncategorized"],"_links":{"self":[{"href":"http:\/\/www.briancbecker.com\/blog\/wp-json\/wp\/v2\/posts\/758","targetHints":{"allow":["GET"]}}],"collection":[{"href":"http:\/\/www.briancbecker.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/www.briancbecker.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/www.briancbecker.com\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"http:\/\/www.briancbecker.com\/blog\/wp-json\/wp\/v2\/comments?post=758"}],"version-history":[{"count":9,"href":"http:\/\/www.briancbecker.com\/blog\/wp-json\/wp\/v2\/posts\/758\/revisions"}],"predecessor-version":[{"id":1307,"href":"http:\/\/www.briancbecker.com\/blog\/wp-json\/wp\/v2\/posts\/758\/revisions\/1307"}],"wp:attachment":[{"href":"http:\/\/www.briancbecker.com\/blog\/wp-json\/wp\/v2\/media?parent=758"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/ww
w.briancbecker.com\/blog\/wp-json\/wp\/v2\/categories?post=758"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/www.briancbecker.com\/blog\/wp-json\/wp\/v2\/tags?post=758"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}