
Robert J.K. Jacob

Robert J.K. Jacob (also known as Rob Jacob) is a professor in the department of computer science at Tufts University. He currently works on implicit brain-computer interfaces, and his research focuses on new interaction models and techniques and on user interface software.[1]

Education & Professional Life[edit]

Robert Jacob received his Ph.D. from Johns Hopkins University. He has served as Vice-President of ACM SIGCHI, as chair of the CHI and UIST conferences, and as General Co-Chair of UIST and TEI. Before his tenure at Tufts, he worked in the Human-Computer Interaction Lab at the Naval Research Laboratory.[1]

He was also a professor at University College London, Université Paris-Sud, and the MIT Media Laboratory.[1]

In 2007, he was elected to the ACM CHI Academy. In 2016, he became an ACM Fellow.[1]

Works[edit]

Brain-Computer Interfaces[edit]

Jacob co-authored “From Brains to Bytes”, a paper presenting different types of brain-computer interfaces (BCIs). In the paper, he and his co-authors discuss research directions aimed at granting users direct or passive control of a computer interface using only brain signals.

The research areas discussed are as follows:

Direct Control Interfaces (DCIs)[edit]

These are interfaces that replace a user’s conventional direct interaction with a computer, such as using a mouse and keyboard.[2]

Invasive Brain-Computer Interaction[edit]

This interaction involves implanting microelectrodes into the grey matter of the brain in an attempt to capture brain activity more accurately.[2]

Non-Invasive Brain-Computer Interaction[edit]

Unlike invasive BCIs, these interfaces use external sensing systems such as electroencephalography (EEG) or functional near-infrared spectroscopy (fNIRS) to capture brain activity without implanted hardware.[2]

Passive Brain-Computer Interaction[edit]

Direct control interfaces, whether invasive or non-invasive, use brain activity as the primary input channel, but they require considerable user training and setup; hooking up to an EEG, for example, takes far more time and effort than reaching for a standard mouse or keyboard. Passive BCIs have therefore drawn attention because they detect brain activity that occurs naturally while the user works, treating the brain as a “complementary” (rather than primary) source of information used alongside a conventional input device such as a mouse or keyboard.[2]
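
As a rough illustration of this idea, the Python sketch below shows one plausible shape for a passive-BCI loop: a smoothed brain signal is classified into a workload level that quietly tunes the interface while conventional input continues to drive it. The fNIRS driver, the threshold value, and the adaptation behavior here are all invented for illustration and are not taken from the paper.

```python
# Minimal sketch of a passive BCI loop, assuming a hypothetical fNIRS
# driver that yields oxygenated-hemoglobin (HbO) readings. Thresholding
# a smoothed signal is just one plausible way to treat brain activity
# as a complementary, implicit input channel.
from collections import deque
import random

def fnirs_samples():
    """Hypothetical stand-in for a real fNIRS driver: yields HbO values."""
    while True:
        yield random.gauss(0.0, 1.0)

def workload_level(window, threshold=0.5):
    """Classify mental workload from the mean of a signal window."""
    return "high" if sum(window) / len(window) > threshold else "low"

def adapt_interface(level):
    """The mouse/keyboard still drive the UI; the brain signal only
    tunes how much detail the interface presents."""
    if level == "high":
        print("workload high -> hide optional panels, defer notifications")
    else:
        print("workload low  -> show full detail")

window = deque(maxlen=50)             # roughly a few seconds of samples
for i, sample in enumerate(fnirs_samples()):
    window.append(sample)
    if i % 50 == 49:                  # re-classify once per full window
        adapt_interface(workload_level(window))
    if i >= 199:
        break
```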

Reality-based Interaction[edit]

In a CHI paper titled "Reality-Based Interaction: A Framework for Post-WIMP Interfaces", Jacob and his team propose a notion that unifies a large subset of emerging interaction styles, and they provide a framework based on this notion. The framework focuses on four real-world themes:

Naive Physics[edit]

This is the common sense knowledge that people have about the physical world.[3]

Body Awareness & Skills[edit]

People are aware of their physical bodies and possess skills for controlling and coordinating them.[3]

Environment Awareness & Skills[edit]

People’s sense of their surroundings gives them skills for negotiating, manipulating, and navigating within their environment.[3]

Social Awareness & Skills[edit]

People are aware of others in their environment and have skills for interacting with them.[3]

The framework seeks to base interaction on these themes because users already possess the needed skills, which may reduce the mental effort required to operate a system.[3] Jacob and his team conducted multiple case studies on interfaces such as the Apple iPhone and an electronic tourist guide, where this framework might allow for deeper analysis.

Tangible Programming for Children[edit]

Jacob helped develop Tern, a tangible programming language designed to give children an intuitive introduction to computer programming in educational settings. It is used to create programs for robots such as the LEGO Mindstorms RCX and iRobot Create without the use of a mouse or keyboard.[4] Instead, the user builds “physical computer programs” from an interlocking system of wooden blocks representing possible robot actions. The blocks contain no electronics or power supplies; Tern instead uses a webcam to photograph the block arrangement and converts it into digital code using the “TopCodes computer vision library”.[4]
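
The sketch below gives a rough sense of what such a compile step could look like, assuming the TopCodes vision pass has already turned the webcam photo into (marker code, x, y) detections. It is not the project’s actual code: the marker IDs, action names, and left-to-right block layout are all invented for illustration.

```python
# Illustrative sketch of a Tern-style compile step. We assume the
# TopCodes computer vision library has already been run over the webcam
# image and produced (code, x, y) tuples; `detected` below is made-up data.

# Hypothetical mapping from TopCode marker IDs to robot actions.
ACTIONS = {31: "FORWARD", 47: "LEFT", 55: "RIGHT", 61: "BEEP"}

def compile_blocks(detected):
    """Order detected blocks by their position in the chain (here: x
    coordinate, assuming a left-to-right layout) and emit commands."""
    chain = sorted(detected, key=lambda d: d[1])      # sort by x
    return [ACTIONS[code] for code, x, y in chain if code in ACTIONS]

# Fake detection results as (marker_code, x, y) tuples; a real run would
# get these from the computer-vision pass over the webcam photo.
detected = [(55, 300, 120), (31, 100, 118), (61, 400, 121), (47, 200, 119)]
print(compile_blocks(detected))   # ['FORWARD', 'LEFT', 'RIGHT', 'BEEP']
```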

Improving Performance of Virtual Reality Applications Through Parallel Processing[edit]

Jacob proposed an approach to improving the performance of virtual reality applications through parallel processing. The approach centers on a model called “Distributed Links over Variables evaluation” (DLoVe) for implementing virtual reality and other “non-WIMP” user interfaces.[5] It captures the parallel, continuous structure of these interfaces by combining a data-flow component with an event-based component for discrete interactions. DLoVe allows the constraints to be partitioned and executed in parallel across multiple machines, which is what enhances performance.[5] The system lets code written for a single machine run in various environments with minimal modification, and it allows single-user programs to be converted into multi-user programs.
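
The toy sketch below illustrates only the core idea: interface state as variables connected by one-way links (constraints), with independent links evaluated in parallel. It is not the DLoVe system itself; the variable names and the tracker-to-screen mapping are invented, and real DLoVe distributes links across machines rather than threads.

```python
# Toy sketch of the DLoVe idea: variables connected by one-way links,
# with independent links evaluated in parallel by a scheduler.
from concurrent.futures import ThreadPoolExecutor

variables = {"hand_x": 0.4, "hand_y": 0.7, "cursor_x": 0.0, "cursor_y": 0.0}

# Each link recomputes one output variable from the input variables.
links = [
    ("cursor_x", lambda v: v["hand_x"] * 1920),   # map tracker to screen
    ("cursor_y", lambda v: v["hand_y"] * 1080),
]

def evaluate(link):
    out, fn = link
    return out, fn(variables)

# These two links are independent, so they can run concurrently; DLoVe
# partitions such links across multiple machines for the same effect.
with ThreadPoolExecutor(max_workers=2) as pool:
    for out, value in pool.map(evaluate, links):
        variables[out] = value

print(variables)
```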

Text Entry for Ultra-Small Touchscreens Using a Fixed Cursor and Movable Keyboard[edit]

Jacob and his team noted the difficulty of touch-based text entry on ultra-small touchscreens such as smartwatches: the relatively “fat finger” of a human hand prevents users from selecting elements much smaller than their fingertips. To address this, they introduced a technique called DriftBoard, in which the user selects keys by panning a movable QWERTY keyboard beneath a fixed cursor point. They compared this technique with existing techniques such as ZoomBoard and SwipeBoard and found that DriftBoard performed well against them, showing promise for text entry on ultra-small touchscreens.[6]
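
A minimal sketch of the core geometry follows: the cursor stays fixed on screen while a pan offset moves the keyboard beneath it, and a tap selects whichever key lies under the cursor. The layout, key sizes, and cursor position are invented for illustration; the paper’s actual geometry differs.

```python
# Minimal sketch of the DriftBoard idea: fixed cursor, movable keyboard.
ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]
KEY_W, KEY_H = 20, 20                  # key size in (made-up) pixels
CURSOR = (100, 30)                     # fixed cursor position on screen

def key_under_cursor(offset_x, offset_y):
    """Translate the fixed cursor into keyboard coordinates using the
    current pan offset, then look up which key lies there."""
    kx = CURSOR[0] - offset_x
    ky = CURSOR[1] - offset_y
    row, col = int(ky // KEY_H), int(kx // KEY_W)
    if 0 <= row < len(ROWS) and 0 <= col < len(ROWS[row]):
        return ROWS[row][col]
    return None

# Pan the keyboard so 'h' (row 1, column 5) sits under the fixed cursor,
# then "tap" by reading the key at that position.
offset_x = CURSOR[0] - (5 * KEY_W + KEY_W // 2)
offset_y = CURSOR[1] - (1 * KEY_H + KEY_H // 2)
print(key_under_cursor(offset_x, offset_y))   # -> 'h'
```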

References[edit]

  1. "Rob Jacob Home Page". www.cs.tufts.edu.
  2. "From Brains to Bytes". https://www.cs.tufts.edu/~jacob/papers/crossroads.pdf
  3. "Reality-Based Interaction: A Framework for Post-WIMP Interfaces". https://www.cs.tufts.edu/~jacob/papers/chi08.pdf
  4. "Tern - Tangible Programming". hci.cs.tufts.edu.
  5. http://www.cs.tufts.edu/~jacob/papers/supercomputing.deligiannidis.pdf
  6. http://www.cs.tufts.edu/~jacob/papers/shibata.chi16.pdf
