22 Mar 2019

feedPlanet KDE

Effective HMI interaction and safety attention monitoring using eye tracking technology: DeepGlance Quick

Interacting effectively with increasingly widespread and advanced systems is one of the most important challenges of our time. Most modern HMIs are based on mouse, keyboard or touch screen and allow even very complex devices to be controlled in a simple and intuitive way. In some contexts, however, the user cannot have direct contact with the device at all; these are hands-free interactions, and voice commands are often used instead. But controlling a system by voice, however natural, is not effective for every type of operation or in every environment. Every technology has its peculiarities, which is why HMI design and UX are the subject of continuous research aimed at offering ever more effective and natural interaction methods, often by combining complementary technologies.

Eye tracking technology

Eye tracking is an innovative technology that uses dedicated devices to accurately estimate, instant by instant, where a person is looking. It is spreading rapidly thanks to falling costs, the miniaturization of the devices and the high degree of reliability now achieved.

Knowing the direction of the gaze lets a system immediately understand which element we are interested in and want to issue commands to, without our having to move a cursor or touch a surface. Knowing where we look, a system can also scroll a page or an image automatically, always offering the best framing of the point of interest. The gaze is also closely related to attention, and attention monitoring is essential when particularly critical operations are carried out.

DeepGlance Quick

The DeepGlance team has over 15 years of experience with this technology and its applications in the main markets, including medical, healthcare, automotive, retail and entertainment.

The main barrier to integrating this technology is the significant initial effort it requires: specific knowledge of how eye movements work, device management, and how to transform the low-level data returned by an eye tracker into the high-level behaviors needed to control a system.

This is where DeepGlance Quick comes in: a QML extension plugin that encapsulates the complexity of using eye tracking technology and allows anyone to integrate it immediately into their own Qt application. Using the plugin within Qt Design Studio, it is possible to create and test gaze-controlled applications, or add eye tracking functionality to existing Qt projects, in a few minutes.

DeepGlance Quick was presented for the first time at SPS IPC Drives Nuremberg 2018, the most important European exhibition for industrial automation, where it was met with great enthusiasm by industry insiders.

How to get started using Qt Design Studio

Before starting, make sure you have installed Qt Design Studio 1.0.0.

An eye tracking device is required to control the application, but if you do not have one, you can use the eye tracker simulator provided by the plugin.

As a first step, download the DeepGlance Quick package.

The Tobii Stream Engine library is a dependency and is distributed in the plugin package. Make sure that Qt Design Studio and your application can find it by copying it to a folder in your system path.

The following example uses an EyeArea inside a Rectangle and turns the Rectangle red when it is observed for a certain amount of time.

To get started, create a new Qt Design Studio project named HelloWorld and copy the DgQuick plugin folder into the "imports" directory.

Modify HelloWorld.qml to add the plugin types:

import QtQuick 2.10
import DgQuick 1.0
import HelloWorld 1.0

Item {
  width: Constants.width
  height: Constants.height

  Screen01 {
  }

  EyeTracker {
      model: EyeTracker.ModelTobii
      Component.onCompleted: start()
  }

  ErrorHandler {
      onError: {
          if (error === ErrorHandler.ErrorEyeTrackerSoftwareNotInstalled) {
              console.log("Tobii software not installed");
          } else if (error === ErrorHandler.ErrorEyeTrackerNotConnected) {
              console.log("Tobii eye tracker not connected");
          }
      }
  }

  EventHandler {
      anchors.fill: parent
  }
}

The EyeTracker type is used to control the eye tracker device, the ErrorHandler to handle errors, and the EventHandler to forward mouse and keyboard events to the plugin.

To use the simulator just set the EyeTracker model property to EyeTracker.ModelSimulator.

Add EyeButton.qml:

import QtQuick 2.10
import DgQuick 1.0

Rectangle {
   width: 100
   height: 100
   color: "green"
   EyeArea {
       anchors.fill: parent
       onAction: { parent.color = 'red' }
   }
}

The EyeArea is a gaze-sensitive area and is the main component of the DeepGlance Quick plugin.

Finally, modify Screen01.ui.qml to add the EyeButton:

import QtQuick 2.10
import HelloWorld 1.0

Rectangle {
  width: Constants.width
  height: Constants.height

  EyeButton {
      anchors.centerIn: parent
  }
}

For more information and details please refer to the official documentation.

Hands-on experience

Eye tracking is one of those technologies whose potential is fully understood only by experiencing it firsthand.

For this reason, the DeepGlance plugin was used to develop a demonstrator that brings together the main interactions that can be carried out using eye tracking technology.

Take a look at the tutorial video.

You can download the demonstrator at the link below:

http://www.deepglance.com/deepglance-explorer-download

Interaction and Experience design

Technology alone is not enough to create natural interfaces and a valuable user experience; a design focused on the engagement and involvement of users is also necessary. For that reason, DeepGlance has started a collaboration with the Interaction & Experience Design Research Lab of the Polytechnic University of Milan, whose design department is ranked 6th in the world. Recently, during the workshop "New paradigms for HC interaction and eye tracking driven interfaces", held in collaboration with Qt, students of Digital and Interaction Design designed solutions that integrate eye tracking technology. Thanks to the integration of DeepGlance Quick in Qt Design Studio, they quickly developed working prototypes of their projects.

Use cases

There are many applications where eye tracking technology can add value.

In the medical field, there is often a need to consult images and information or to control systems in contexts where the hands cannot be used. This is the case, for example, in robotic surgery, where the surgeon's hands are engaged with the manipulators that control the robotic arms. Here the gaze can be used to make adjustments, decide which robotic arm to associate with the right or left hand, automatically move the robotic endoscope camera based on the point of interest so as to always provide the best view, and verify that the surgeon's attention is focused while controlling the system. In robotic surgery the gaze is often used to select, while a single physical button mounted on the manipulator is used to confirm an action.

Eye tracking technology is widespread in the healthcare field, where it enables communication for patients with clinical conditions such as cerebral palsy, ALS, autism and spinal cord injury. It is used to create personal communication and entertainment systems, in home or clinical/hospital environments, controlled exclusively with the eyes, allowing the patient to write and vocalize messages, send emails, browse books, surf the Internet, control the television, home automation systems and more.

In the field of retail, events, exhibitions and interactive museums, entertainment, gamification and shopping-experience systems can be developed to offer visitors an engaging experience, counter the "showrooming" effect, increase the time spent in the store or stand, and enhance the brand image with a "cool" and innovative experience.

Eye tracking can also be used to create information points for hotels, airports, shopping malls, real estate agencies and shop windows that present content adapted to the user's interest, profile prospects based on the content consumed, and let users browse information discreetly, without making any gesture or touching the display.

It can serve as an alternative way of interacting with vending machines for drinks, snacks, tickets and orders, avoiding the need to use the hands, for reasons of hygiene and comfort and to offer the consumer an innovative experience.

In the marketing field, it can be used to present promotions and offers effectively, increase the prospect's exposure to the message the brand wants to convey, reach an audience not currently interested in the message or product, and open a high-quality two-way communication channel, including through cross-media marketing.

In control rooms, operator efficiency can be improved through multimodal interaction, combining eye tracking with traditional inputs to show information in an adaptive, contextualized way and to interact more quickly and naturally.

In the automotive sector, it is possible to monitor the driver's attention, detect distractions, and interact with the on-board instrumentation in a more effective and safer way, for example by understanding the target of a voice command or by automatically selecting the element or device to be controlled with the joystick on the steering wheel.

In short

DeepGlance Quick is a plugin that allows you to integrate and test eye tracking technology in your Qt application quickly and with little effort, transforming a traditional interface into a gaze-controllable one. Eye tracking can be used as the sole input method or in combination with other technologies to make the interaction more effective and natural, making the most of each of them in a complementary way. On the machine side, it can be used to monitor an operator's attention and adapt the information shown in an interface to the elements the user is interested in.

More info:

http://www.deepglance.com

info@deepglance.com

The post Effective HMI interaction and safety attention monitoring using eye tracking technology: DeepGlance Quick appeared first on Qt Blog.

22 Mar 2019 11:06am GMT

21 Mar 2019

feedPlanet KDE

No Deal Brexit

A no deal Brexit will mean shutting off most of the supply capacity from the EU to Great Britain, and the government itself says this will be chaotic. Many of the effects are unknown, but in the days and weeks that follow, food supplies and medicine supplies will start to fail. The rules on moving money about, and even on making a phone call, will be largely undefined. International travel will gain unknown new bureaucracies. EU and WTO law also means there needs to be a hard border in Ireland again, restarting terrorist warfare. Inflation will kick in, unemployment will skyrocket and people will die.

Although the UK government has dropped the dangerous slogan "no deal is better than a bad deal", it is astonishing they were allowed to get away with saying it for so long without challenge. There are still many members of the UK government who are perfectly happy with a chaotic no deal Brexit, and the Prime Minister, unwilling to change tactics, is using more and more populist language to insist that everyone should support her, threatening the whole of UK society in the greatest game of chicken since the Cold War. It would be trivial to revoke the Article 50 process, but unless that is chosen a no deal Brexit will happen.

The political process is broken and has been for many years on this topic; there is no campaign from the groups I would normally expect to run one that I can join. The SNP, Greens and Quakers are not doing what they would usually do and enabling their members to have a voice. Religions in general exist to look after their members in times of crisis, but so far nobody in Quakers that I've spoken to has any interest in making any practical mitigation steps.

Most people in Britain still think it'll never happen because the politicians will see sense and back down, but they are wrong: the politicians are not acting rationally, they are acting very irrationally, and all it takes for a no deal Brexit to happen is for no other decision to be taken.

So I find myself waving a European flag in Edinburgh each evening for the People's Vote campaign, a London-based campaign with plenty of problems but the only one going. I'll go to London this weekend to take part in the giant protest there.

Please come along if you live in the UK. Please also sign the petition to revoke article 50. Wish us luck.

21 Mar 2019 2:43pm GMT

foss-north 2019: Community Day


I don't dare to count the days until foss-north 2019, but it is very soon. One of the changes this year is that we are expanding the conference with an additional community day.

The idea with the community day is that we arrange conference rooms all across town and invite open source projects to use them for workshops, install fests, hackathons, dev sprints or whatever else they see fit. It is basically a day of mini-conferences spread out across town.

The community day is on April 7, the day before the conference days, and is free of charge.

This part of the arrangements has actually been one of the most interesting ones, as it involves a lot of coordination. I'd like to start by thanking all our room hosts. Without them, the day would not be possible!

The other half of the puzzle is our projects. I am very happy to see such a large group of projects willing to try this out for the first time, and I hope for lots and lots of visitors so that they will want to come back in the future as well.

The location of each project, as well as the contents of each room, can be found on the community day page. Even though the day is free of charge, some of the rooms want you to pre-register, either because seats are limited or because the hosts want to know whether to expect five or fifty visitors. I would also love for you to register at our community day meetup, just to give me an indication of the number of participants.

Also - don't forget to get your tickets for the conference days - and combine this with a training. We're already past the visitor count of the 2018 event, so we will most likely be sold out this year!

21 Mar 2019 7:28am GMT

feedplanet.freedesktop.org

Peter Hutterer: Using hexdump to print binary protocols

I had to work on an image yesterday where I couldn't install anything and the set of pre-installed tools was quite limited. And I needed to debug an input device, which is usually done with libinput record. So eventually I found that hexdump supports formatting of the input bytes, but it took me a while to figure out the right combination. The various resources online only got me partway there. So here's an explanation which should get you to your results quickly.

By default, hexdump prints identical input lines as a single line with an asterisk ('*'). To avoid this, use the -v flag as in the examples below.

hexdump's format string is a single-quote-enclosed string that contains the count, the element size and a double-quote-enclosed printf-like format string. So a simple example is this:


$ hexdump -v -e '1/2 "%d\n"'
-11643
23698
0
0
-5013
6
0
0

This prints 1 element ('iteration') of 2 bytes as an integer, followed by a linebreak. Or in other words: it takes two bytes, converts them to an int and prints it. If you want to print the same input value in multiple formats, use multiple -e invocations.


$ hexdump -v -e '1/2 "%d "' -e '1/2 "%x\n"'
-11568 d2d0
23698 5c92
0 0
0 0
6355 18d3
1 1
0 0

This prints the same 2-byte input value, once as decimal signed integer, once as lowercase hex. If we have multiple identical things to print, we can do this:


$ hexdump -v -e '2/2 "%6d "' -e '" hex:"' -e '4/1 " %x"' -e '"\n"'
-10922 23698 hex: 56 d5 92 5c
0 0 hex: 0 0 0 0
14879 1 hex: 1f 3a 1 0
0 0 hex: 0 0 0 0
0 0 hex: 0 0 0 0
0 0 hex: 0 0 0 0

This prints two elements of size 2 as integers, then the same elements as four 1-byte hex values, followed by a linebreak. %6d is a standard printf conversion and is documented in the manual.

Let's go and print our protocol. The struct representing the protocol is this one:


struct input_event {
#if (__BITS_PER_LONG != 32 || !defined(__USE_TIME_BITS64)) && !defined(__KERNEL__)
    struct timeval time;
#define input_event_sec time.tv_sec
#define input_event_usec time.tv_usec
#else
    __kernel_ulong_t __sec;
#if defined(__sparc__) && defined(__arch64__)
    unsigned int __usec;
#else
    __kernel_ulong_t __usec;
#endif
#define input_event_sec __sec
#define input_event_usec __usec
#endif
    __u16 type;
    __u16 code;
    __s32 value;
};

So we have two longs for sec and usec, two shorts for type and code and one signed 32-bit int. Let's print it:


$ hexdump -v -e '"E: " 1/8 "%u." 1/8 "%06u" 2/2 " %04x" 1/4 "%5d\n"' /dev/input/event22
E: 1553127085.097503 0002 0000 1
E: 1553127085.097503 0002 0001 -1
E: 1553127085.097503 0000 0000 0
E: 1553127085.097542 0002 0001 -2
E: 1553127085.097542 0000 0000 0
E: 1553127085.108741 0002 0001 -4
E: 1553127085.108741 0000 0000 0
E: 1553127085.118211 0002 0000 2
E: 1553127085.118211 0002 0001 -10
E: 1553127085.118211 0000 0000 0
E: 1553127085.128245 0002 0000 1

And voila, we have our structs printed in the same format evemu-record prints out. So with nothing but hexdump, I can generate output I can then parse with my existing scripts on another box.
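
If Python happens to be available on the box, the same layout can also be parsed with a few lines of struct.unpack. This is a rough sketch only, assuming a 64-bit machine where sec and usec are 8-byte longs (adjust the format for other ABIs); the output format mimics the hexdump command above:

#!/usr/bin/env python3
# Rough sketch: parse struct input_event the same way the hexdump
# format string above does. Assumes 8-byte sec/usec (64-bit machine).
import struct
import sys

EVENT_FORMAT = "llHHi"          # sec, usec, type, code, value
EVENT_SIZE = struct.calcsize(EVENT_FORMAT)

with open(sys.argv[1], "rb") as fd:     # e.g. /dev/input/event22
    while True:
        data = fd.read(EVENT_SIZE)
        if len(data) < EVENT_SIZE:
            break
        sec, usec, evtype, code, value = struct.unpack(EVENT_FORMAT, data)
        print(f"E: {sec}.{usec:06d} {evtype:04x} {code:04x} {value:5d}")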

21 Mar 2019 12:30am GMT

15 Mar 2019

feedplanet.freedesktop.org

Peter Hutterer: libinput's internal building blocks

Ho ho ho, let's write libinput. No, of course I'm not serious, because no-one in their right mind would utter "ho ho ho" without a sufficient backdrop of reindeer to keep them sane. So what this post is instead is me writing a nonworking fake libinput in Python, for the sole purpose of explaining roughly what libinput's architecture looks like. It'll be to libinput what a Duplo car is to a Maserati: four wheels and something to entertain the kids with, but the queue outside the nightclub won't be impressed.

The target audience are those that need to hack on libinput and where the balance of understanding vs total confusion is still shifted towards the latter. So in order to make it easier to associate various bits, here's a description of the main building blocks.

libinput uses something resembling OOP except that in C you can't have nice things unless what you want is a buffer overflow\n\80xb1001af81a2b1101. Instead, we use opaque structs, each with accessor methods and an unhealthy amount of verbosity. Because Python does have classes, those structs are represented as classes below. This all won't be actual working Python code, I'm just using the syntax.

Let's get started. First of all, let's create our library interface.


class Libinput:
    @classmethod
    def path_create_context(cls):
        return _LibinputPathContext()

    @classmethod
    def udev_create_context(cls):
        return _LibinputUdevContext()

    # dispatch() means: read from all our internal fds and
    # call the dispatch method on anything that has changed
    def dispatch(self):
        for fd in self.epoll_fd.get_changed_fds():
            self.handlers[fd].dispatch()

    # return whatever the next event is
    def get_event(self):
        return self._events.pop(0)

    # the various _notify functions are internal API
    # to pass things up to the context
    def _notify_device_added(self, device):
        self._events.append(LibinputEventDevice(device))
        self._devices.append(device)

    def _notify_device_removed(self, device):
        self._events.append(LibinputEventDevice(device))
        self._devices.remove(device)

    def _notify_pointer_motion(self, x, y):
        self._events.append(LibinputEventPointer(x, y))


class _LibinputPathContext(Libinput):
    def add_device(self, device_node):
        device = LibinputDevice(device_node)
        self._notify_device_added(device)

    def remove_device(self, device_node):
        self._notify_device_removed(device)


class _LibinputUdevContext(Libinput):
    def __init__(self):
        self.udev = udev.context()

    def udev_assign_seat(self, seat_id):
        self.seat_id = seat_id

        for udev_device in self.udev.devices():
            device = LibinputDevice(udev_device.device_node)
            self._notify_device_added(device)


We have two different modes of initialisation, udev and path. The udev interface is used by Wayland compositors and adds all devices on the given udev seat. The path interface is used by the X.Org driver and adds only one specific device at a time. Both interfaces have the dispatch() and get_event() methods, which is how every caller gets events out of libinput.
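
To make that flow concrete, here is what a caller's main loop could look like against this fake API. A sketch only: wait_for_libinput_fd() and handle_event() are stand-ins for whatever event loop and processing the caller really has, and we pretend get_event() returns None once the queue is empty.

# Sketch of a caller (e.g. a compositor) driving the fake API above.
li = Libinput.udev_create_context()
li.udev_assign_seat("seat0")

while True:
    wait_for_libinput_fd()       # block until libinput's fd becomes readable
    li.dispatch()                # read from the internal fds, queue events
    while True:
        event = li.get_event()   # drain the event queue
        if event is None:
            break
        handle_event(event)      # caller-specific processing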

In both cases we create a libinput device from the data and create an event about the new device that bubbles up into the event queue.

But what really are events? Are they real or just a fidget spinner of our imagination? Well, they're just another object in libinput.


class LibinputEvent:
    @property
    def type(self):
        return self._type

    @property
    def context(self):
        return self._libinput

    @property
    def device(self):
        return self._device

    def get_pointer_event(self):
        if isinstance(self, LibinputEventPointer):
            return self  # This makes more sense in C where it's a typecast
        return None

    def get_keyboard_event(self):
        if isinstance(self, LibinputEventKeyboard):
            return self  # This makes more sense in C where it's a typecast
        return None


class LibinputEventPointer(LibinputEvent):
    @property
    def time(self):
        return self._time / 1000

    @property
    def time_usec(self):
        return self._time

    @property
    def dx(self):
        return self._dx

    @property
    def absolute_x(self):
        return self._x / self._x_units_per_mm

    def absolute_x_transformed(self, width):
        return self._x * width / self._x_max_value

You get the gist. Each event is actually an event of a subtype with a few common shared fields and a bunch of type-specific ones. The events often contain some internal value that is calculated on request. For example, the API for the absolute x/y values returns mm, but we store the value in device units instead and convert to mm on request.

So, what's a device then? Well, just another I-cant-believe-this-is-not-a-class with relatively few surprises:


class LibinputDevice:
    class Capability(Enum):
        CAP_KEYBOARD = 0
        CAP_POINTER = 1
        CAP_TOUCH = 2
        ...

    def __init__(self, device_node):
        pass  # no-one instantiates this directly

    @property
    def name(self):
        return self._name

    @property
    def context(self):
        return self._libinput_context

    @property
    def udev_device(self):
        return self._udev_device

    def has_capability(self, cap):
        return cap in self._capabilities

    ...

Now we have most of the frontend API in place and you start to see a pattern: this is how all of libinput's API works, you get some opaque read-only objects with a few getters and accessor functions.
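
Continuing the caller sketch from above, consuming those objects is nothing but calling getters. The string event type names here are stand-ins for the real enums, so this is still illustration rather than working code:

event = li.get_event()
if event.type == "POINTER_MOTION":           # stand-in for the real enum
    pointer = event.get_pointer_event()      # the "typecast" from above
    print(pointer.time, pointer.dx)
elif event.type == "DEVICE_ADDED":
    device = event.device
    if device.has_capability(LibinputDevice.Capability.CAP_POINTER):
        print("new pointer device:", device.name)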

Now let's figure out how to work on the backend. For that, we need something that handles events:


class EvdevDevice(LibinputDevice):
    def __init__(self, device_node):
        self.fd = open(device_node)
        super().context.add_fd_to_epoll(self.fd, self.dispatch)
        self.initialize_quirks()

    def has_quirk(self, quirk):
        return quirk in self.quirks

    def dispatch(self):
        while True:
            data = self.fd.read(input_event_byte_count)
            if not data:
                break

            self.interface.dispatch_one_event(data)

    def _configure(self):
        # some devices are adjusted for quirks before we
        # do anything with them
        if self.has_quirk(SOME_QUIRK_NAME):
            self.libevdev.disable(libevdev.EV_KEY.BTN_TOUCH)

        if 'ID_INPUT_TOUCHPAD' in self.udev_device.properties:
            self.interface = EvdevTouchpad()
        elif 'ID_INPUT_SWITCH' in self.udev_device.properties:
            self.interface = EvdevSwitch()
        # ... other interface types ...
        else:
            self.interface = EvdevFallback()


class EvdevInterface:
    def dispatch_one_event(self, event):
        pass


class EvdevTouchpad(EvdevInterface):
    def dispatch_one_event(self, event):
        ...


class EvdevTablet(EvdevInterface):
    def dispatch_one_event(self, event):
        ...


class EvdevSwitch(EvdevInterface):
    def dispatch_one_event(self, event):
        ...


class EvdevFallback(EvdevInterface):
    def dispatch_one_event(self, event):
        ...

Our evdev device is actually a subclass (well, C, *handwave*) of the public device and its main function is "read things off the device node". It passes that on to a magical interface. Other than that, it's a collection of generic functions that apply to all devices. The interfaces are where most of the real work is done.

The interface is decided on by the udev type and is where the device-specifics happen. The touchpad interface deals with touchpads, the tablet and switch interfaces with those devices, and the fallback interface is the one for mice, keyboards and touch devices (i.e. the simple devices).

Each interface has very device-specific event processing and can be compared to the Xorg synaptics vs wacom vs evdev drivers. If you are fixing a touchpad bug, chances are you only need to care about the touchpad interface.
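
As a rough sketch in the same fake, nonworking spirit, the fallback interface's event processing for a simple relative mouse could look something like the following. Assume the data has already been parsed into a libevdev-style event; the event names are borrowed purely as illustration and none of this is libinput's real code:

class EvdevFallback(EvdevInterface):
    def __init__(self, device):
        self.device = device            # the EvdevDevice that owns us
        self.dx, self.dy = 0, 0

    def dispatch_one_event(self, event):
        if event.type == libevdev.EV_REL:
            # accumulate deltas until the end of the hardware report
            if event.code == libevdev.EV_REL.REL_X:
                self.dx += event.value
            elif event.code == libevdev.EV_REL.REL_Y:
                self.dy += event.value
        elif event.code == libevdev.EV_SYN.SYN_REPORT:
            # the report is complete: pointer acceleration (the filter
            # code further down) would be applied here, then the motion
            # bubbles up to the context as an event
            self.device.context._notify_pointer_motion(self.dx, self.dy)
            self.dx, self.dy = 0, 0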

The device quirks used above are another simple block:


class Quirks:
    def __init__(self):
        self.read_all_ini_files_from_directory('$PREFIX/share/libinput')

    def has_quirk(self, device, quirk):
        for file in self.quirks:
            if (quirk.has_match(device.name) or
                    quirk.has_match(device.usbid) or
                    quirk.has_match(device.dmi)):
                return True
        return False

    def get_quirk_value(self, device, quirk):
        if not self.has_quirk(device, quirk):
            return None

        quirk = self.lookup_quirk(device, quirk)
        if quirk.type == "boolean":
            return bool(quirk.value)
        if quirk.type == "string":
            return str(quirk.value)
        ...

A system that reads a bunch of .ini files, caches them and returns their value on demand. Those quirks are then used to adjust device behaviour at runtime.
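
For illustration, a minimal stand-in for that block could be built on Python's configparser. The directory, file glob and the MatchName key below are made up for this sketch; the real quirks file format is documented in libinput itself:

# Minimal stand-in for the quirks block above, not libinput's real format.
import configparser
import fnmatch
from pathlib import Path

class IniQuirks:
    def __init__(self, directory="/usr/share/libinput"):
        self.sections = []
        for ini in sorted(Path(directory).glob("*.quirks")):
            parser = configparser.ConfigParser()
            parser.optionxform = str        # keep key names case-sensitive
            parser.read(ini)
            self.sections.extend(parser[name] for name in parser.sections())

    def get_quirk_value(self, device_name, key):
        value = None
        for section in self.sections:
            # later matching sections win, like a cascading config
            if fnmatch.fnmatch(device_name, section.get("MatchName", "*")):
                value = section.get(key, value)
        return value

The caller would then get back the raw string for a matching key and interpret it as a boolean, string or range as appropriate, just like the fake Quirks class above does.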

The next building block is the "filter" code, which is the word we use for pointer acceleration. Here too we have a two-layer abstraction with an interface.


class Filter:
    def dispatch(self, x, y):
        # converts device-unit x/y into normalized units
        return self.interface.dispatch(x, y)

    # the 'accel speed' configuration value
    def set_speed(self, speed):
        return self.interface.set_speed(speed)

    # the 'accel speed' configuration value
    def get_speed(self):
        return self.speed

    ...


class FilterInterface:
    def dispatch(self, x, y):
        pass


class FilterInterfaceTouchpad(FilterInterface):
    def dispatch(self, x, y):
        ...


class FilterInterfaceTrackpoint(FilterInterface):
    def dispatch(self, x, y):
        ...


class FilterInterfaceMouse(FilterInterface):
    def dispatch(self, x, y):
        self.history.push((x, y))
        v = self.calculate_velocity()
        f = self.calculate_factor(v)
        return (x * f, y * f)

    def calculate_velocity(self):
        for delta in self.history:
            total += delta
        velocity = total / timestamp  # as illustration only
        return velocity

    def calculate_factor(self, v):
        # this is where the interesting bit happens,
        # let's assume we have some magic function
        f = v * 1234 / 5678
        return f

So libinput calls filter_dispatch on whatever filter is configured and passes the result on to the caller. The setup of those filters is handled in the respective evdev interface, similar to this:


class EvdevFallback:
    ...
    def init_accel(self):
        if self.udev_type == 'ID_INPUT_TRACKPOINT':
            self.filter = FilterInterfaceTrackpoint()
        elif self.udev_type == 'ID_INPUT_TOUCHPAD':
            self.filter = FilterInterfaceTouchpad()
        ...

The advantage of this system is twofold. First, the main libinput code only needs one place where we really care about which acceleration method we have. And second, the acceleration code can be compiled separately for analysis and to generate pretty graphs. See the pointer acceleration docs. Oh, and it also allows us to easily have per-device pointer acceleration methods.
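
Because the filter only ever sees x/y deltas, the acceleration code really can be exercised completely standalone. Here is a toy example of what "generate pretty graphs" means in practice; the curve below is made up for illustration, not libinput's real one:

# Toy example: dump velocity -> acceleration-factor pairs for plotting.
def accel_factor(velocity, speed_setting=0.0):
    # made-up curve: slow movements dampened, fast ones sped up, capped at 3x
    return min(3.0, 0.3 + velocity * (1.0 + speed_setting))

for step in range(0, 50):
    velocity = step / 10.0                  # e.g. mm/ms
    print(velocity, accel_factor(velocity))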

Finally, we have one more building block - configuration options. They're a bit different in that they're all similar-ish but only to make switching from one to the next a bit easier.


class DeviceConfigTap:
    def set_enabled(self, enabled):
        self._enabled = enabled

    def get_enabled(self):
        return self._enabled

    def get_default(self):
        return False


class DeviceConfigCalibration:
    def set_matrix(self, matrix):
        self._matrix = matrix

    def get_matrix(self):
        return self._matrix

    def get_default(self):
        return [1, 0, 0, 0, 1, 0, 0, 0, 1]

And then the devices that need one of those slot them into the right pointer in their structs:


class EvdevFallback:
    ...
    def init_calibration(self):
        self.config_calibration = DeviceConfigCalibration()
        ...

    def handle_touch(self, x, y):
        if self.config_calibration is not None:
            matrix = self.config_calibration.get_matrix()

            x, y = matrix.multiply(x, y)
            self.context._notify_pointer_abs(x, y)
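
From the caller's side, configuration is again just getters and setters on those objects. A sketch only: config_tap is a made-up attribute name mirroring config_calibration above, not the real libinput_device_config_* API.

# Sketch of the caller side of those config blocks.
def configure_device(device):
    if device.config_tap is not None:
        device.config_tap.set_enabled(True)     # enable tap-to-click
    if device.config_calibration is not None:
        # reset the calibration to its default (identity) matrix
        default = device.config_calibration.get_default()
        device.config_calibration.set_matrix(default)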

And that's basically it, those are the building blocks libinput has. The rest is detail. Lots of it, but if you understand the architecture outline above, you're most of the way there in diving into the details.

15 Mar 2019 6:15am GMT

Peter Hutterer: libinput and location-based touch arbitration

One of the features in the soon-to-be-released libinput 1.13 is location-based touch arbitration. Touch arbitration is the process of discarding touch input on a tablet device while a pen is in proximity. Historically, this was provided by the kernel wacom driver but libinput has had userspace touch arbitration for quite a while now, allowing for touch arbitration where the tablet and the touchscreen part are handled by different kernel drivers.

Basic touch arbitration is relatively simple: when a pen goes into proximity, all touches are ignored. When the pen goes out of proximity, new touches are handled again. There are some extra details (esp. where the kernel handles arbitration too) but let's ignore those for now.

With libinput 1.13 and in preparation for the Dell Canvas Dial Totem, touch arbitration can now be limited to a portion of the screen only. On the totem (future patches, not yet merged) that portion is a square slightly larger than the tool itself. On normal tablets, that portion is a rectangle, sized so that it should encompass the user's hand and the area around the pen, but not much more. This enables users to use both pen and touch input at the same time, providing for bimanual interaction (where the GUI itself supports it, of course). We use the tilt information of the pen (where available) to guess where the user's hand will be and adjust the rectangle position accordingly.
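
To make the idea concrete, here is a rough Python sketch of rectangle-based arbitration. The sizes, the tilt heuristic and the class are made up for illustration and are not libinput's implementation:

# Rough sketch of the idea only, not libinput's code.
class TouchArbitration:
    def __init__(self, rect_size_mm=(100, 70)):
        self.rect = None            # (x, y, width, height) or None
        self.size = rect_size_mm

    def pen_update(self, x, y, tilt_x, in_proximity):
        if not in_proximity:
            self.rect = None        # pen gone, stop discarding touches
            return
        # shift the rectangle towards where the hand probably rests,
        # using the pen tilt as a hint for left vs right
        w, h = self.size
        offset = w / 2 if tilt_x >= 0 else -w / 2
        self.rect = (x + offset - w / 2, y, w, h)

    def touch_allowed(self, x, y):
        if self.rect is None:
            return True             # no pen nearby, touch passes through
        rx, ry, w, h = self.rect
        return not (rx <= x <= rx + w and ry <= y <= ry + h)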

There are some heuristics involved and I'm not sure we got all of them right so I encourage you to give it a try and file an issue where it doesn't behave as expected.

15 Mar 2019 5:58am GMT