Across Acoustics
An Ultrasound-Based Touchscreen
Current touchscreen technology has its limitations. In this episode, we talk with Jérémy Moriot (Université de Sherbrooke) about his team's development of an ultrasound-based system that not only can work with various types of surfaces, but can also detect multiple touches at the same time.
Associated paper: Maxime Bilodeau, Jérémy Moriot, Joëlle Fréchette-Viens, Raphaël Bouchard, Philippe Boulais, Nicolas Quaegebeur, and Patrice Masson. "Embedded real-time ultrasound-based multi-touch system." JASA Express Letters 4, 082802 (2024). https://doi.org/10.1121/10.0028323.
Read more from JASA Express Letters.
Learn more about Acoustical Society of America Publications
Music: Min 2019 by minwbu from Pixabay.
Kat Setzer 00:06
Welcome to Across Acoustics, the official podcast of the Acoustical Society of America's publications office. On this podcast, we will highlight research from our four publications. I'm your host, Kat Setzer, Editorial Associate for the ASA.
Kat Setzer 00:19
I imagine most of us have used a touchscreen, perhaps even daily, for those of us with smartphones or similar mobile technology. Today, I'm talking with Jeremy Moriot about an innovative technology that his team created that uses ultrasound. It was discussed in the August JASA Express Letters article, "Embedded real-time ultrasound-based multi-touch system." Thanks for taking the time to speak with me today, Jeremy. How are you?
Jeremy Moriot 00:47
Hello, Kat. I'm very fine. Thank you.
Kat Setzer 00:50
Awesome. Well, thank you for coming. First, tell us a bit about your research background.
Jeremy Moriot 00:54
Yeah, of course. So I did a PhD in vibroacoustics in France, in the nuclear field. The goal of that PhD was to develop embedded systems for detecting leaks inside vapor vessel heat exchangers using vibroacoustic phenomena. After that, I moved to Canada and did a two-year postdoc on embedded ultrasound techniques for detection, what's called structural health monitoring: basically developing algorithms, methods, and piezoelectric array patterns to improve that kind of detection. And the lab where I was doing my postdoc, the acoustics group at the Université de Sherbrooke in Quebec, Canada, realized that instead of detecting cracks or fatigue in a structure, this kind of system could be used to detect touch, because a touch creates a change of mechanical impedance at the surface, and so it creates reflections of the vibration waves. So they decided to explore this opportunity.
Kat Setzer 02:05
Okay, that's very cool. So your research has to do with touchscreens, as you were just mentioning. How do touchscreens typically work?
Jeremy Moriot 02:13
Yeah. So today, most touchscreens, I would say like 98% of them, use capacitive elements. Basically, there's a thin film of capacitive elements below the screen, and when you approach your finger, it disturbs the pattern of the electric field. With that, you can detect a touch, and with an embedded algorithm, you can also locate it. It's really cheap for, let's say, small touchscreens like smartphones and wearables, like watches, so it's mostly used in microelectronics.
Jeremy Moriot 03:01
For larger screens, like TVs and bigger, the most common technology is infrared. With infrared, you basically have a rectangular frame with LEDs that emit in the infrared domain. These LEDs emit rays that propagate across the surface, and when you put your finger somewhere, it hides some of these rays, and that's how the system locates the touch. The problem with this kind of optical technology is that the rays propagate in straight lines, so it's basically not usable for curved structures. But since the frame sits directly on top of the screen, it doesn't matter whether the screen is plastic, glass, or even another kind of structure, like metal or wood; it works well. And the cost stays high when the screen is small, so this kind of technology is really suited to large screens.
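The occlusion idea behind those infrared frames can be illustrated with a toy sketch. This is purely a hypothetical example (the function and the beam indices are mine, not from the episode): each blocked horizontal and vertical beam contributes one coordinate of the touch.

```python
# Toy illustration of infrared-frame touch location: LEDs shoot beams
# along rows and columns, and a touch is located at the intersection of
# the blocked beams. Hypothetical example, not from the paper.

def locate_touch(blocked_rows, blocked_cols):
    """Return the (row, col) center of the blocked beams, or None if no touch."""
    if not blocked_rows or not blocked_cols:
        return None
    row = sum(blocked_rows) / len(blocked_rows)
    col = sum(blocked_cols) / len(blocked_cols)
    return (row, col)

# A finger wide enough to block rows 4-5 and column 7:
print(locate_touch([4, 5], [7]))  # (4.5, 7.0)
```

This also shows why a straight-line beam grid cannot follow a curved surface: the geometry assumes every beam crosses the whole screen unobstructed.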
Jeremy Moriot 04:19
For capacitive, as I said, it's well suited to small screens; for larger screens, it costs a lot. The reason is simple: the supply chain was developed for small screens, basically for smartphones, so when you increase the size of the screen, it becomes really expensive. Also, you cannot put this kind of capacitive film on curved screens; that becomes challenging, and the cost increases rapidly. Capacitive technology is also sensitive to anything that affects the electromagnetic field, so it's not suited to metallic structures; if you want to make a metallic touch structure, it becomes challenging. It's also affected by the thickness of the structure, so it basically works well for thin structures. Now, with projected capacitive, the sensitivity is getting better for thick structures, but you still have a limitation on this parameter.
Jeremy Moriot 05:30
So in the past, many people worked on how to use acoustics or vibrations to develop new kinds of touchscreens or touch structures, because, as I said, when you put your finger on a structure, you change the contact impedance, the mechanical impedance, so you create reflections of the waves that are emitted through the structure. But most of those developments did not come up with a final product. There was a startup called Sensitive Object that commercialized a system, and after that they were acquired by another big company, I don't remember the name. But now I don't see any trace of that kind of system, so I assume the performance wasn't enough, or maybe the cost was too high.
Kat Setzer 06:30
Okay, okay. That was a vibration-based system you were saying?
Jeremy Moriot 06:34
Yeah, yeah.
Kat Setzer 06:36
Okay. So what is the system you're proposing in this article?
Jeremy Moriot 06:39
So the system we propose in this article uses guided acoustic waves, what we call Lamb waves. Basically, there are two kinds of acoustic, vibration-based systems: passive systems and active systems. In a passive system, you have an array of piezoelectric sensors, piezoelectric elements that are bonded directly onto the structure, on the screen or any other kind of structure, and they record the vibration patterns. When a user puts a finger on the screen, it acts like a vibration source: it creates waves that propagate up to the piezoelectric elements, and then an embedded algorithm detects the presence of a touch and locates it using different kinds of algorithms. It can be a beamforming algorithm or some other imaging algorithm. But the limitation of that system is that the touch needs to create a sufficiently strong source of vibrations to be detected. For example, when the user just moves a finger across the screen, it doesn't generate a lot of vibration, so this kind of system isn't able to track the finger, and multi-touch tracking has the same problem. It's really just for detecting and locating a tap on the screen or on the structure. So it's quite limited, but the advantage is that the computing for this kind of system is not so demanding, so it can be embedded on very cheap microcontrollers.
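The passive idea of locating a tap from its own vibration can be sketched in one dimension. This is a hypothetical, simplified illustration (the numbers and the function are mine): a tap launches a wave that reaches two sensors at different times, and the arrival-time difference gives the tap position along the line between them.

```python
# Toy 1-D sketch of passive tap localization (hypothetical, not from the
# paper): for a tap between two sensors, the difference in arrival times
# maps linearly to the tap position relative to the midpoint.

def locate_tap_1d(t1, t2, wave_speed_m_s):
    """Tap position relative to the midpoint between two sensors.

    t1, t2: arrival times at sensor 1 (left) and sensor 2 (right), seconds.
    For a tap between the sensors, x = c * (t1 - t2) / 2.
    """
    return 0.5 * (t1 - t2) * wave_speed_m_s

# Sensors at -0.1 m and +0.1 m, wave speed 1000 m/s, tap at -0.05 m:
# the wave travels 0.05 m to sensor 1 and 0.15 m to sensor 2.
x = locate_tap_1d(0.05 / 1000, 0.15 / 1000, 1000.0)
print(round(x, 3))  # -0.05
```

The catch Jeremy points out is visible here: the method needs a clean arrival time, which a light sliding finger simply doesn't produce.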
Jeremy Moriot 08:35
Active systems also use piezoelectric elements, but some elements emit waves through the structure, and other piezoelectric elements record the reflections of those waves. So you need a system to emit the waves and a system to record them, which makes the electronics a bit more complex. But the advantage is that when you put your finger on the screen, you disturb the wave pattern, so the sensitivity is increased compared to passive systems. The user doesn't need to tap on the screen, so the touch doesn't need to generate acoustic waves itself. And of course, you're able to track one finger or multiple fingers on the screen, so the performance is really much higher than with passive systems.
Jeremy Moriot 09:27
The system we have developed and present in this paper is an active system. There is a linear array of, let's say, between two and six small piezoelectric elements. We used thin discs, three or six millimeters in diameter, and I think the thickness was around 0.2 millimeters, so really cheap, common piezoelectric elements. These elements are bonded, or soldered, I would say, onto a flexible PCB, so they form an array. You can bond this array directly to a structure; in this paper, the structure is a glass plate three millimeters thick, a square surface of 20 centimeters by 20 centimeters. The array of piezoelectric sensors is connected to an embedded PCB that we developed. On this PCB, there is a microcontroller, which controls the emission and reception of the ultrasound waves through the chips; a microprocessor, which does the processing of the recorded signals; and an embedded GPU, which runs the imaging algorithm, basically to locate the different touches.
Jeremy Moriot 11:03
So I can go a little bit into the processing. There is an emission sequence that is repeated every 15 milliseconds. Every 15 milliseconds, we emit a chirp, from, I think it was, 20 kilohertz to 100 kilohertz; the details are given in the paper. Then we look at whether there is some difference in the recorded signal in this frequency range. If yes, we conclude that there is at least one touch. After that, we transfer the residual signals related to this touch to the CPU. In the CPU, there are different kinds of treatments, like Fourier transforms and filtering, just to, let's say, clean up the signal, and then the signal is transferred to the GPU. In the GPU, we use an embedded generalized cross-correlation algorithm, GCC-PHAT, with a phase transform. Basically, it takes the signal and does a correlation operation with a baseline of reference signals. In this case, for example, there were 2,500 pixels on the structure, so we have that many baseline signals, and in one step you want to correlate the arriving signals with all of them. That's why it requires a GPU. And this is, of course, a limitation of the system, because GPUs are very expensive, they consume a lot of energy, and if the number of pixels on the structure increases, you need to increase the processing power. So that's one limitation.
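The per-pixel correlation step described above can be sketched in a few lines. This is a hypothetical illustration (the function, the signal shapes, and the random data are mine, not the paper's): each pixel has a pre-recorded baseline signature, and the touch-induced residual is correlated against all of them at once, which is exactly the workload that parallelizes well on a GPU.

```python
import numpy as np

# Minimal sketch of correlating a touch-induced residual against one
# pre-recorded baseline signature per pixel. Hypothetical illustration;
# the real system runs a GCC-based algorithm on an embedded GPU.

def correlation_map(residual, baselines):
    """Normalized correlation of the residual with each pixel's baseline.

    residual:  (n_samples,) recorded signal minus the no-touch reference
    baselines: (n_pixels, n_samples) per-pixel reference signatures
    """
    r = residual - residual.mean()
    b = baselines - baselines.mean(axis=1, keepdims=True)
    num = b @ r                                   # one dot product per pixel
    den = np.linalg.norm(b, axis=1) * np.linalg.norm(r) + 1e-12
    return num / den

rng = np.random.default_rng(0)
baselines = rng.standard_normal((2500, 256))      # 2,500 pixels, as in the episode
residual = baselines[1234] + 0.1 * rng.standard_normal(256)  # noisy match to pixel 1234
coeffs = correlation_map(residual, baselines)
print(int(np.argmax(coeffs)))  # 1234: the pixel whose baseline matches best
```

The scaling limitation Jeremy mentions is also visible: the work grows linearly with the number of pixels, since every frame must be correlated against every baseline.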
Jeremy Moriot 13:03
After that, what you obtain is a map of correlation coefficients, and high correlation coefficients correspond to touch locations. So you can see the various touches on this map: you can identify the number of touches, and we were able to successfully detect five touches at the same time, so it works for multi-touch. You can also see the touches moving on the map. At that point, you have a map, but you haven't identified the coordinates of the touches, so the image of correlation coefficients is transferred back to the CPU, where we developed a homemade algorithm to extract the coordinates of each touch. The coordinates are then sent to whatever system it's connected to, a laptop or anything else, through what's called the HID USB protocol. So you basically just plug the system into a laptop, and it's automatically detected as a touchscreen; you can really use it as a trackpad or a touchscreen.
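The paper uses a homemade coordinate-extraction algorithm that isn't detailed in the episode; one common way to do this kind of step, shown here purely as an assumed illustration, is to pick strong local maxima out of the correlation map.

```python
import numpy as np

# Hypothetical sketch of turning a correlation map into touch coordinates
# by keeping cells that exceed a threshold and are local maxima among
# their 8 neighbours. Not the paper's actual algorithm.

def extract_touches(cmap, threshold=0.5):
    """Return (row, col) locations of strong local maxima in the map."""
    padded = np.pad(cmap, 1, constant_values=-np.inf)
    touches = []
    for i in range(cmap.shape[0]):
        for j in range(cmap.shape[1]):
            v = cmap[i, j]
            neighbourhood = padded[i:i + 3, j:j + 3]
            if v >= threshold and v == neighbourhood.max():
                touches.append((i, j))
    return touches

cmap = np.zeros((50, 50))   # 50 x 50 = 2,500 pixels, as in the episode
cmap[10, 12] = 0.9          # first touch
cmap[40, 5] = 0.8           # second touch
print(extract_touches(cmap))  # [(10, 12), (40, 5)]
```

A list of peaks like this is exactly what can then be reported over HID USB, one contact per peak, which is why the host sees an ordinary multi-touch device.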
Jeremy Moriot 14:20
One advantage is that you can have touch location on both sides of the screen. On a smartphone, you only have access to one side, because the sensor is bonded on the other side and you don't want to touch the sensor. But with our system, you really have access to both sides of the screen or surface. You also have access to the edges, so you can imagine programming different functions on the edge of the surface. We successfully did this on a glass plate, and also at different sizes, because, as I said at the beginning, the system uses guided waves, and the advantage of guided waves is that their attenuation with distance is not strong. So you can really turn a very large surface into a touch interface. The limitation, of course, will be the processing: the larger the surface, the more you need to either sacrifice the density of pixels, so the spatial resolution, or increase the computing power. For that case, we developed another kind of system that just does touch detection and classification: let's say, is it a tap, a double tap, a swipe? It just classifies the kind of touch. For that system, you only need a microcontroller, because you don't want to locate the touch, so you don't need the correlation process. It's really low-power, low-cost computing, so you can even turn a large surface into, let's say, a touch device, but without necessarily having a location. That's it for this answer.
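The low-cost "detect and classify" mode can be sketched from timing alone. This is a hypothetical illustration (the thresholds and function are mine): with only touch-present durations and the gaps between contacts, a microcontroller can separate a tap, a double tap, and a long press. A swipe would additionally need some location or amplitude trend, so it is left out here.

```python
# Hypothetical sketch of location-free touch classification on a
# microcontroller: only contact durations and inter-contact gaps are used.
# Thresholds are illustrative, not from the paper.

def classify(press_durations_ms, gaps_ms, long_press_ms=500, double_gap_ms=300):
    """Classify a sequence of contacts from durations and the gaps
    between consecutive contacts (all in milliseconds)."""
    if not press_durations_ms:
        return "none"
    if press_durations_ms[0] >= long_press_ms:
        return "long press"
    if len(press_durations_ms) >= 2 and gaps_ms and gaps_ms[0] <= double_gap_ms:
        return "double tap"
    return "tap"

print(classify([80], []))          # tap
print(classify([80, 90], [150]))   # double tap
print(classify([700], []))         # long press
```

Because no correlation against per-pixel baselines is needed, this kind of logic fits comfortably on a cheap microcontroller, which is the point Jeremy makes about large surfaces.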
Kat Setzer 16:11
Yeah. What are the next steps of this research?
Jeremy Moriot 16:17
So the next step of this research... that's a good question. In 2019, I created a startup called All Waves Technologies, and it was incubated at the Université de Sherbrooke. At that time, I was a researcher in the team and, at the same time, the CTO of the startup, so we were trying to find applications. Of course, this happened during the pandemic, so there were different issues. First of all, the message then was, "do not touch anything," and we were trying to sell a touch device, so that was difficult. We were also talking with the big players in the touchscreen industry, or let's say connected objects, and at that time I believe their main concern was how to save their own business, basically, because of various supply-chain problems, essentially due to the pandemic in China. So it was not a good time to enter this kind of market, and our system was also very expensive. We did create an embedded prototype, which was good; before that, it was a LabVIEW interface, so it was really expensive. We succeeded in embedding all the algorithms in a small device, but the cost was around 100 to 300 USD, which was very expensive. Just to give you an idea, in a smartphone, capacitive components cost between one and ten dollars, so ours was one to two orders of magnitude higher. This was a problem, of course.
Jeremy Moriot 18:14
What we tried to sell was that you can create innovative structures; for example, you can turn different components of a car into touch interfaces. We made a YouTube channel, which is still online, where we show different applications. For example, you can draw a virtual panel on the glass window of a car, with up and down positions, and then when you touch the window directly on "up," you control the window that way. We believed that had some potential, but the cost is still an issue, you know, especially in the automotive sector.
Jeremy Moriot 18:58
So after that, finally, the startup... we didn't have enough funds, and in the end we didn't find customers. After three years of development, we decided to give up. The patents on this technology are still owned by the Université de Sherbrooke, so if some company is interested in relaunching the development of this technology, that would be great. But unfortunately, the research on this topic didn't continue at the lab. The reason we published this paper was just to let people know about our advances on this topic, because we believed it was interesting for, let's say, the academic sector.
Kat Setzer 19:46
Yeah, yeah, it sounds really interesting. I'm sad that it didn't go further, because it sounds like such a good idea.
Jeremy Moriot 19:51
Oh, it's not that sad. It was a good adventure, and it was very exciting to develop a prototype from, let's say, a lab experiment and to explore various markets. So no, it was a very exciting adventure.
Kat Setzer 20:10
Well, that's a good way to look at it.
Jeremy Moriot 20:12
Yeah. And you learn a lot, you know, in this kind of entrepreneurship adventure.
Kat Setzer 20:17
Yeah, yeah. Well, hopefully somebody is able to pick up where your team left off. Thank you again for taking the time to speak with me today, and I wish you the best of luck in your research that's not related to this. For those of you who are listening, if you enjoyed this episode, please take a moment to text or email it to someone you think may also like it. Thanks.
Jeremy Moriot 20:35
Thank you.
Kat Setzer 20:38
Thank you for tuning into Across Acoustics. If you would like to hear more interviews from our authors about their research, subscribe and find us on your preferred podcast platform.