13:31:15 Let me start recording. Sure. Okay, I think we're ready to go. I'm delighted that our third guest lecturer, Julie Schwartz, is able to join us today and talk about the amazing work she's been doing on new interaction techniques. 13:31:50 Awesome. Okay, I guess I'll take it over. Great to see you again.

13:31:57 Yeah, thank you. Hi everyone, it's great to be here, and thank you to Brad for inviting me. I had a lot of fun making this talk and I'm looking forward to giving it. 13:32:10 Brad mentioned that some of the students wanted a class on interaction techniques in AR and VR, so he reached out to see if I could give a talk about this. I think I'm a good person to talk about interaction techniques in AR and VR because that's my job, and especially to give a behind-the-scenes perspective on how to actually ship advanced interaction techniques in an industry setting. 13:32:41 I wanted to let everyone know that I have the chat open. I can see your chats, so feel free to interrupt me at any time if you have questions, or just raise your hand, because I want this to be a discussion. I will try to leave some time at the end, but I have a lot of slides to get to, so let's get going.

13:33:06 There is another class in here right afterward, so we have to end pretty promptly. Yes, okay. So, about 45 minutes, and I'll try to keep it under an hour. 13:33:15 Please do feel free to interrupt at any time, in case we don't end up with time for questions.

So, quickly, a little bit about myself. 13:33:25 I'm a principal software engineer at Microsoft, and I've spent my whole time here, which has been about seven years now, working on AR and VR interaction techniques. Not that long ago I was a student just like you; I remember it very well. I was a PhD student at CMU, where I worked on a lot of interaction techniques with Scott Hudson, my advisor, and with Jen Mankoff as well. 13:33:53 Also with Chris Harrison, who is now a professor at CMU. In fact, we did a startup together, called Qeexo. 13:34:04 That was the first time I shipped an interaction technique in a real product; I think it's on most Huawei phones, something like 300 million phones, a lot of phones. 13:34:15 So I did my PhD from 2009 to 2014, and did the Qeexo startup while doing my PhD, which was a first at CMU. Then I wanted to do something with a little broader impact than phone interaction, 13:34:35 so I joined Microsoft as a software engineer and researcher on the shell team to work on the HoloLens, and seven years later I'm still on the shell team working on HoloLens interaction techniques. I've done a lot in those seven years. 13:34:50 To explain what the shell team is: it's the team that works on the core UI of the operating system, meaning anything that shows up when you launch the HoloLens. That's called the shell; it's like the desktop GUI. 13:35:04 That means we get to define all the behaviors of buttons, of anything that's movable. So interaction techniques for AR and VR are exactly what I do.
13:35:23 So I joined the HoloLens team in 2015, and from about 2015, for about three years, I did a combination of writing shell code for the HoloLens 1, and for when we shipped VR in Windows, 13:35:40 while prototyping interaction techniques on the side. 13:35:49 I had this vision of wanting to touch holograms, and I decided the best way to get there was to build prototypes and show people by example, instead of trying to convince them via PowerPoint slides. That happened to work, so I'm hoping to also explain, if you want to ship something in industry, what the best way to do it is. 13:36:13 In short: build things, show them to people, show examples. 13:36:21 This ended up becoming more of a full-time job, and I think I built something like 100 prototypes to test out and prove out the interaction model for the HoloLens 2. That interaction model is based completely on hand interactions, meaning you can reach out and touch holograms; 13:36:42 I'll explain that in a bit. In 2019, three years ago, we shipped the HoloLens 2, with me as the main architect for the new interaction model: architect, engineer, designer, a little bit of everything. I even got to present the work in Barcelona when we announced the HoloLens 2, which was really awesome. 13:37:04 Since then I've been trying to standardize the interactions in toolkits like the Mixed Reality Toolkit (MRTK), I worked on the Microsoft Mesh app, which we're not going to get into, and other future-facing stuff which I'm also not going to talk about.

13:37:23 What I thought might be interesting today is to walk you through some of those 100 prototypes, not all 100, just some highlights that I built all those years ago and that turned into the final interaction model for HoloLens 2, so you can see how an idea turns into a prototype, 13:37:45 how that prototype turns into an interaction principle, and how that actually gets shipped in a product, because those are exactly the steps I went through. Along the way I'll give some suggestions about how to prototype effectively. 13:38:00 Most of the time I'll spend on the prototypes and the hand interaction principles, and then I'll spend a little time introducing MRTK if time allows, because I want to end at about 2:30. I'm on Pacific time, so forgive me if I get the times wrong. If there's time, I'll talk quickly about prototyping tools for HoloLens, and then I want to leave you with some things to think about: opportunities and challenges we still have. Let's see how far we get.

13:38:38 Before we dive into the prototypes, I want to make sure we're all on the same page and give a little background, in case people aren't familiar with the HoloLens 1 and 2. The HoloLens 1 is an augmented reality headset. I have the HoloLens 2 right here: 13:39:00 you wear it on your head like this, there's a visor here, it's clear, and it renders images over your world. 13:39:12 The HoloLens 1 is the same basic idea. From an interaction point of view, though,
13:39:20 they're very, very different. On the HoloLens 1, the way you interact is what we call gaze, gesture, and voice, which means you look with your head, so your head is like a cursor. 13:39:27 Then, if you want to click on something, you have to do this very specific air tap gesture, and you can also use your voice. 13:39:38 So there's a cursor stuck in the middle of your view. It's like a mouse cursor, and it's kind of like having a giant rod poking out of the middle of your head, and then you can interact. 13:39:44 The advantage of this is that it uses a very accurate sensor, the head tracking on the HoloLens, to do the targeting, 13:39:55 and that's basically the best sensor available, so it allows you to be precise. Head gaze and voice interaction is also fairly low fatigue. But I can tell you from personal experience that it's very difficult to teach this air tap pinching gesture. 13:40:13 Most people can't figure out how to do it correctly: tapping with your hand while aiming with your head at the same time is not immediately intuitive.

13:40:23 Okay, fast forward four years: the HoloLens 2, still an augmented reality headset. 13:40:34 Oh, I should mention something really neat about these devices: they're completely self-contained computers. It's basically a phone on your head, which means it's completely wireless and runs independently, just like the Oculus Quest, for example; it's not tethered to any sort of device in your pocket. 13:40:58 Some differences on the HoloLens 2: it has a bigger field of view, and it's much more comfortable. You can actually wear it all day; in fact, that's how I develop. I just wear the HoloLens, 13:41:12 write code, flip down the visor to test, flip up the visor, and keep writing code. 13:41:17 The big difference from the interaction perspective is that it's completely different, because there's new sensor data, in particular fully articulated hand tracking as well as eye tracking, so tracking where your eyes are looking, and that lets us enable a lot of new interactions. The main ones I'm going to talk about are the hand interactions, and there's a huge difference, because now, instead of looking and air tapping, you interact with holograms by touching them, 13:41:44 which is what most people expect to do when they see holograms. In fact, the first thing people do when they put on a HoloLens is reach out their hand and try to touch the holograms. 13:41:54 If you're far away from a hologram, you use a hand ray, and I'm going to talk about how we ended up arriving at this interaction. I know it sounds very intuitive, but there's a lot of detail behind making these feel good and making the interactions successful, even though it seems like the most obvious thing. 13:42:16 A lot of work went into it, and that's what I'm going to talk about today. We didn't coin the term, but the marketing and project management side calls this "instinctual interactions," because it's supposed to be easy to learn and easy to use: 13:42:37 you already know how to use the device.
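For contrast with what follows, the HoloLens 1 head-gaze model is simple enough to sketch in a few lines of Unity-style C#. This is a minimal illustration, not the shell's implementation; the cursor object and the max distance are assumptions.

```csharp
using UnityEngine;

// Minimal head-gaze cursor sketch: a ray from the head (main camera) forward,
// with a cursor object parked on whatever it hits. Put the cursor on a layer
// the raycast ignores so it doesn't block its own ray.
public class HeadGazeCursor : MonoBehaviour
{
    public Transform cursor;        // small quad or ring rendered at the hit point
    public float maxDistance = 5f;  // how far the "rod out of your head" reaches

    void Update()
    {
        Camera head = Camera.main;
        Ray gaze = new Ray(head.transform.position, head.transform.forward);

        if (Physics.Raycast(gaze, out RaycastHit hit, maxDistance))
        {
            cursor.position = hit.point;
            cursor.rotation = Quaternion.LookRotation(hit.normal); // lie flat on the surface
        }
        else
        {
            // Nothing hit: float the cursor at a fixed depth so it stays visible.
            cursor.position = gaze.origin + gaze.direction * maxDistance;
            cursor.rotation = Quaternion.LookRotation(-gaze.direction);
        }
    }
}
```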
13:42:43 Since it's still hard to get your hands on a HoloLens 2, the best way to illustrate what it means to interact with holograms is to show a third-person view, and then a first-person view, of directly touching holograms. 13:43:00 There was a link to this in the required reading: the HoloLens 2 announcement demo video. It gives you a third-person view of what it's like to interact with holograms, and of what I mean by "reach out and touch the holograms": 13:43:14 you basically interact with holograms as if the items were really there. 13:43:19 One thing I want to mention about this demo is that it is all a completely functional, real prototype. All of these interactions are real; I was actually using all of these interfaces, and you can tell, because sometimes there are bugs. If you look back, at one point I pressed and the button didn't go through; that was a small bug. 13:43:43 So that's the third-person view. Now, what does it look like from the headset? 13:43:49 This is mixed reality capture: it's basically capturing the view from the HoloLens, like hitting record on the device, and it shows you what it actually looks like to experience this. 13:44:07 So that's the first-person view.

Okay, now that we've covered what the HoloLens 2 actually feels like, I want to dive straight in. I don't see any questions (I see someone new joined), so let's dive straight into the prototypes. 13:44:31 Oh, a quick question: what are the sensors? Do you have a position sensor? Is there an extra camera pointing down to look for your hands? 13:44:35 There are the sensors that do head tracking, which do the motion tracking, and then a separate sensor to look at the hands. 13:44:47 It's a depth sensor on the HoloLens 2; it doesn't look down, it just looks out. For the gestural field of view: if you search the web for "instinctual interaction design guidelines" and click one of the top links, there's a really great video that shows the gestural field of view of the HoloLens. It does not point down, which can impact comfort, because you need to hold your hands a bit out in front of you.

13:45:15 Okay, let me dive right into the prototypes and start telling you the stories. 13:45:28 So, this is the first prototype. 13:45:32 We mostly use Unity for HoloLens development. You can use other things, like the Unreal game engine, or write directly in C++ against the DirectX APIs, but Unity is the thing I found approachable back then. 13:45:48 This is really cool, because it's the first prototype I ever built for HoloLens, and it turned out to be one of the most impactful. 13:45:57 It's a HoloLens 1 prototype, and it's very simple: I just removed the head cursor and used the basic hand tracking to pull the corners of a cube. I thought people were joking at the time when they were blown away by this prototype.
13:46:18 But in hindsight, this was actually an amazing, very simple prototype that, back in 2015, led to the eventual product, which was awesome. 13:46:29 Some things about it were really important. I built this prototype and then started sharing it across the team, and I had seen similar prototypes in the org, 13:46:39 but this one had a couple of key differences, which I learned from one of Don Norman's books on design: the importance of feedback. 13:46:48 You can see a visual change on the cube when it's grabbed, and I also played a sound, which is a little hard to hear here (maybe it's even muted). Those two things are really important for communicating that the object is grabbed, and, believe it or not, other prototypes didn't have that simple thing, and I think that's why they weren't as impactful. 13:47:05 The other important thing was that the corners directly followed the hands as closely as possible. I removed all the smoothing code I could find for hand tracking, so yes, it jittered a little, but the corner had the lowest latency I could get. 13:47:21 These things were really important to making this feel compelling. 13:47:27 Another thing I noticed when showing this prototype to people and listening to their comments was that with direct manipulation you immediately want a larger field of view, because as you get close to a hologram with a small field of view, the hologram overwhelms your view; you want to see all of it, but the edges get cut off. So that was the first prototype.

13:47:52 This next one was the first prototype that showed the transition between near and far interaction that's now in the HoloLens 2 interaction model. 13:48:01 The idea here is that you're scaling a cube, and when you get close, the cursor disappears and you just use your hand. 13:48:12 What we learned from this prototype was that this way of transitioning between near and far interaction seemed to work just fine, and that users understood it. However, it could be a little difficult to tell when you were transitioning from near-interaction to far-interaction mode, because the cursor was subtle, so people sometimes couldn't tell whether they should be using their head or their hands.

13:48:39 All right, on to the next one. Those two were done on HoloLens, and this is also still a HoloLens, but now we start playing with actual articulated hand interactions. 13:48:49 This was the first case where we wanted to know: what if you could manipulate controls using your hands? Would that even work at all? Which controls would feel good, and which wouldn't? 13:49:03 So we worked with a designer to think of as many ways as possible to press buttons, turn knobs, and move switches, and we tried to think through the different affordances, the different ways you could turn a knob. 13:49:22 You can turn a knob by grabbing it and twisting with your hand, or you can pinch a little handle and move it around. Which of those works better? We wanted to find out.
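As a concrete example of the simplest of those affordances, here is a minimal Unity-style C# sketch of a poke button. It's an illustration, not the shipped control: the fingertip reference, the travel distances, and the axis convention are all assumptions.

```csharp
using UnityEngine;

// Poke-button sketch. Assumes the button's local +Z axis points out toward
// the user, so a pressing fingertip moves toward negative local Z.
public class PokeButton : MonoBehaviour
{
    public Transform fingertip;          // tracked index fingertip (assumed provided)
    public Transform frontPlate;         // visual plate that travels with the finger
    public AudioSource clickSound;       // audio stands in for missing haptics
    public float pressDepth = 0.010f;    // meters of travel to register a press
    public float releaseDepth = 0.005f;  // must retract past this to re-arm

    bool pressed;

    void Update()
    {
        // Fingertip position in the button's local space.
        Vector3 local = transform.InverseTransformPoint(fingertip.position);

        // How far the finger has pushed past the button face (z = 0).
        float depth = Mathf.Clamp(-local.z, 0f, pressDepth);

        // The plate follows the finger, giving continuous visual feedback.
        frontPlate.localPosition = new Vector3(0f, 0f, -depth);

        if (!pressed && depth >= pressDepth)
        {
            pressed = true;                           // committed press
            if (clickSound != null) clickSound.Play();
        }
        else if (pressed && depth <= releaseDepth)
        {
            pressed = false;                          // re-armed only after retracting
        }
    }
}
```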
Where did the sensing fall apart, and where did the hand tracking limit how well an interaction could work? 13:49:38 So we built this prototype, and again, over and over, the same pattern applies: build a prototype as quickly as you can, show it to people, get feedback, improve it, and take a lot of notes. Show it to everyone you can, listen to them, and don't be afraid to show your early work. That worked really well.

13:50:08 Some things we learned from this prototype: people really liked the pinch slider and the push button, and also the switch you could flick with your index finger. 13:50:22 However, we started to notice challenges with the lack of haptic feedback, because you're basically touching air. There was no feedback to keep your fingers in place when turning the knob, for example, so it was easy to accidentally grab the knob, turn your hand, and disengage from the control. 13:50:43 So I had to add a lot of hysteresis: once you grab, the disengage volume has to be a lot larger than the engage volume, because there's nothing to physically keep your hand from drifting out of the grab volume. 13:50:58 Another thing, again because there's no haptic feedback: light and sound are really important. Visual cues for when you've pressed a button, and a sound, are really critical. In fact, one piece of feedback I got on this prototype was that it was hard to know when you were engaged with a control, because the hover feedback was too subtle. It was just a color change, and something like a light was what people were expecting.

13:51:28 Another thing we started to notice with the knob rotation is that it was easy to get your hand into an orientation where the hand tracking starts to fall apart and jitters a lot. So even though people initially liked the idea of rotating a knob with their wrist, in practice it turned out to be a tricky interaction, both because of the lack of haptic feedback and because it put your hand into an orientation that's not "sensor friendly," as we call it. 13:52:02 Designing really simple interactions based on pinches, movements, and pokes is a way to make interactions sensor friendly.

13:52:17 From these initial prototypes we started to get some basic ideas that turned into fairly simple design principles: press buttons by directly touching them with your finger, and, again, provide visual and audio cues. Because there's no real haptic feedback, you need to compensate very strongly with "virtual haptics": a lot of visual cues and a lot of sound. 13:52:47 If I make prototypes nowadays, I crank the knob up as far as I can on visual and audio feedback, overdo it, and then pull back, rather than having too little feedback and people feeling like your prototype is broken.
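A minimal Unity-style sketch combining two of those lessons, grab hysteresis and strong state feedback. The caller is assumed to supply a pinch point and pinch state from whatever hand-tracking API is available; the radii and colors are illustrative.

```csharp
using UnityEngine;

// Grab hysteresis sketch: engage inside a tight radius, but only disengage once
// the hand drifts outside a larger one, since nothing physical holds the hand
// in place. State changes are announced loudly with color and sound, standing
// in for the haptics we don't have.
[RequireComponent(typeof(Renderer), typeof(AudioSource))]
public class HystereticGrab : MonoBehaviour
{
    public float engageRadius = 0.03f;    // pinch within 3 cm to grab
    public float releaseRadius = 0.08f;   // drift past 8 cm to let go
    public Color grabbedColor = Color.cyan;
    public AudioClip grabClip;
    public AudioClip releaseClip;

    Renderer rend;
    AudioSource audioSource;
    Color normalColor;
    public bool Grabbed { get; private set; }

    void Start()
    {
        rend = GetComponent<Renderer>();
        audioSource = GetComponent<AudioSource>();
        normalColor = rend.material.color;
    }

    // Call each frame with the pinch point and whether the fingers are pinched.
    public void UpdateGrab(Vector3 pinchPosition, bool isPinching)
    {
        float distance = Vector3.Distance(pinchPosition, transform.position);
        bool wasGrabbed = Grabbed;

        if (!Grabbed)
            Grabbed = isPinching && distance < engageRadius;
        else
            Grabbed = isPinching && distance < releaseRadius; // generous exit volume

        if (Grabbed != wasGrabbed)
        {
            // Obvious state feedback: color change plus sound.
            rend.material.color = Grabbed ? grabbedColor : normalColor;
            AudioClip clip = Grabbed ? grabClip : releaseClip;
            if (clip != null) audioSource.PlayOneShot(clip);
        }
    }
}
```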
13:53:12 All right, moving along. That was 3D UI controls: how do you press buttons and move sliders? Each of these prototypes really existed to ask a question, and so far the question was whether, given articulated hand tracking, you can even poke buttons and move sliders, because we didn't know if that was possible. 13:53:29 The next question was: can you manipulate objects well enough to, say, move your windows around?

That's what this prototype did. Here we have different ways to grab a crane and push a cube, and we were comparing grabbing-and-rotating against more of a faux-physics style of manipulation, where you push a cube around. 13:53:52 In this case the prototype had a task, which, I'll mention, is a very good recommendation for any prototype: give people a task to do. It really forces them to engage and give you good feedback. 13:54:12 What we learned is that while pushing the cube around felt quite fun and delightful, it was quite difficult to actually get the cube to the destination, compared to the crane. 13:54:26 On the other hand, when you constrained the physics, so you only let a slate (a flat cube with a picture of some news on it) rotate along one axis, it felt both controlled and kind of fun, because you still got to push the object around. 13:54:43 So we learned that adding constraints to your physics, to your pushing, can let you be more controlled. The other thing we learned is that when you do manipulation by grabbing and rotating with the wrist, hand tracking can break down in many cases. The really awesome thing about working at Microsoft, where we do both the hand tracking and the interaction, is that we could tell the hand tracking team and give them examples of rotations where the tracking wasn't good, and they could go improve it. 13:55:19 So one of the takeaways was that in order to ship hand interactions, the hand tracking needs to work in a wide range of poses, not just the best-case poses.

13:55:31 And this is my one tip for students: when you're working on your own prototypes, and when you go work at companies on interaction techniques, give people a task in your prototype. This prototype would have been a lot less successful if I had just had the crane and the cube and the slate as-is, because then you show it to people, they move things around a little, and they say, "oh, that's cool." When you give them a task, like "put the crane in the cube," it forces people to think more deeply about how something actually feels, because sometimes they can't successfully do it. 13:56:08 It's a really simple thing, but it made a huge difference, and it was something people at Microsoft weren't doing before I came along. The favorite demo was: here's a crane, rotate it in your hand, and people would say, "that's cool, what a great prototype!" Having a task let us find more problems, which was good.

13:56:32 Oh, I see a hand, a question in the chat: when you're gripping an object and the fingers go out of view of the sensor, how does the HoloLens decide whether the object is still being held?
13:56:42 Oh, that's a great question. This was one of the problems the prototype identified, this breaking case: especially with large objects, if you grab something and move to resize it, the hand leaves the tracking field of view. What do you do? The hand tracking data just drops. 13:57:01 We had a couple of options, and we actually prototyped them: one was to immediately drop the object; another was to drop it after a certain period of time; and the last was to keep the object grabbed indefinitely. 13:57:20 Keeping the object grabbed definitely didn't work, because then you have stuck objects lying around. What we ended up shipping was a small time-out: if your hand leaves the field of view for a couple of seconds, the object stays grabbed, and if you bring your hand back into the field of view it retains the grab. 13:57:43 But the time-out is not very long, so it almost feels like you drop it when you leave the field of view. This is one of the nice things about motion controllers, which don't have this issue as much, though they still do if it's an inside-out tracking system like the HoloLens. 13:58:02 And that's why it's important to build prototypes and not just design things in your imagination: you wouldn't necessarily think of that breaking case until you start building. That happened all the time at work: 13:58:26 we would find all of these things we hadn't thought about, because we discovered them through the prototype.

Yeah, a question in the room: for one of the earlier prototypes, the scaling and rotation one, why did you choose to smooth it? The movement looked smoothed out. 13:58:42 Oh yeah. We would always smooth and interpolate the size or rotation of the object toward the hand's last position, and that was for two reasons. One is that the hand tracking data doesn't come in at 60 frames per second; 13:59:04 it actually came in at something like 30 or 40, so without smoothing it would look very jittery. It was mostly to make the demo look better, because in the scaling-and-rotation case the object isn't directly following the hand. The Minecraft cube directly follows the hand, but in the scaling case it's a fixed mapping. 13:59:31 I would do that again; I would probably smooth again. In MRTK, and in most HoloLens demos, you always do this 60-frames-per-second smoothing so things aren't jumping around and feeling jittery. If you look at this one, it's a little bit jumpier; also, in this prototype the data is coming in a little faster, I believe. I changed some values in the HoloLens registry to do that. 14:00:04 I wouldn't call it cheating, but I went a little lower down into the system. Thank you.
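A minimal sketch of that kind of interpolation, assuming hand samples arrive at roughly 30 to 40 Hz while rendering runs at 60 fps; the follow speed is an illustrative value.

```csharp
using UnityEngine;

// Frame-rate smoothing sketch: hand data arrives slower than the render rate,
// so instead of snapping to each new sample (which looks jittery), we move a
// fraction of the way toward the latest hand-derived target every frame.
public class SmoothedManipulation : MonoBehaviour
{
    public float followSpeed = 12f;  // higher = snappier, lower = floatier

    Vector3 targetPosition;
    Quaternion targetRotation;

    void Start()
    {
        targetPosition = transform.position;
        targetRotation = transform.rotation;
    }

    // Call this whenever a new hand sample arrives (30-40 Hz is fine).
    public void SetTarget(Vector3 position, Quaternion rotation)
    {
        targetPosition = position;
        targetRotation = rotation;
    }

    void Update()
    {
        // Exponential smoothing, frame-rate independent thanks to deltaTime.
        float t = 1f - Mathf.Exp(-followSpeed * Time.deltaTime);
        transform.position = Vector3.Lerp(transform.position, targetPosition, t);
        transform.rotation = Quaternion.Slerp(transform.rotation, targetRotation, t);
    }
}
```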
14:00:08 Okay, so going back to object manipulation: 14:00:19 what principles could we take away from that prototype? Again, they're very simple: grab objects directly with your hands to manipulate them. One thing I'll talk about a little later is that people use different hand poses based on the size of the object they're interacting with, 14:00:36 but you can use affordances to nudge people into sensor-friendly hand poses: show small affordances, and people will grab the small affordances. 14:00:47 The other thing we learned, which I didn't really cover here, is that with wrist rotation, like I'm doing here, it's hard to be precise, because even with lots of smoothing, as you let go the sensor data usually rotates the palm a little, so the object rotates a little on release. 14:01:12 If you need things to be really precise, which can matter a lot if you're trying to align a virtual object with a real object in an industrial setting, then the bounding box, where you're constrained to scale or rotate along one axis at a time, is the preferred recommendation. 14:01:37 So if you want to grab and rotate things with the wrist, that's great for inspection and can feel really cool, but if you want to be precise, use a bounding box. For physics, our recommendation was: physics can be really fun, but again, if you need to be precise, add constraints, so only allow pushing along one axis, or rotation along one axis.

14:02:00 Here I want to take a little aside and show you a really awesome prototype that a colleague named Oscar Saladin made, called Touching Holograms. If you search online for "touching holograms" you'll find a Medium post from Microsoft Design by Oscar 14:02:20 that shows this really awesome HoloLens 2 prototype. A couple of things here are really cool. He basically just put colliders on his hands in the game engine and ran Unity physics, but he added a lot of nice design details. One is hand occlusion: 14:02:42 I'll replay it so you can see that his hand renders as black, so you really see your true hand. At the same time, he's moving his hand quite slowly in this video, and that's why the hand occlusion works. If you move your hands quickly, you'll immediately see a really big lag that highlights the tracking latency, which is one of the reasons we didn't ship hand occlusion; we explored it quite a lot for HoloLens 2, but we didn't ship it. 14:03:21 The other thing that's really good about this demo is his use of light. He has a point light inside the cube that gets brighter whenever there's a collision between the finger and the cube, and it has this really great effect of communicating contact. 14:03:34 I just think this is a great example of how awesome these interactions can look, and it's a really great post to look at.
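A rough Unity-style sketch of that light trick. It assumes the fingertip objects carry trigger colliders tagged "Fingertip" (Unity also requires a Rigidbody on one side of a trigger pair), and the intensity values are illustrative.

```csharp
using UnityEngine;

// Contact-light sketch in the spirit of the Touching Holograms demo: a point
// light inside the object flares when a fingertip collider touches it, then
// fades back down, visually communicating contact.
[RequireComponent(typeof(Light))]
public class ContactLight : MonoBehaviour
{
    public float litIntensity = 3f;
    public float fadeSpeed = 4f;

    Light contactLight;
    int touchingFingers;

    void Start()
    {
        contactLight = GetComponent<Light>();
        contactLight.intensity = 0f;
    }

    void OnTriggerEnter(Collider other)
    {
        if (other.CompareTag("Fingertip")) touchingFingers++;
    }

    void OnTriggerExit(Collider other)
    {
        if (other.CompareTag("Fingertip")) touchingFingers--;
    }

    void Update()
    {
        // Brighten while touched, fade out once all fingers leave.
        float target = touchingFingers > 0 ? litIntensity : 0f;
        contactLight.intensity = Mathf.MoveTowards(
            contactLight.intensity, target, fadeSpeed * Time.deltaTime);
    }
}
```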
14:03:48 Let's go to the next slide. Yep, great, all right. 14:03:57 Actually, that was the easy stuff: pushing buttons and moving things around was kind of obvious. But with the Windows product we have a lot of 2D content; the Settings app, for example, is basically a 2D window you interact with. 14:04:13 The question was: can you use fully articulated hands to successfully interact with this content? So we built this prototype where you can scroll, zoom, and click on things, and we gave it to a bunch of people to see whether they could actually do these interactions successfully.

14:04:30 Some things we learned here: for one, it's really important to have a cursor, and I'll talk about that later, because without a cursor and without hand occlusion it's very hard to perceive depth and to know when you're actually contacting this thing in the air. 14:04:48 And again, the rendering: when you have a small field of view and you render a lot of white, it can be quite overwhelming. This is actually one of the reasons I think we went with dark mode by default in the end: when you get close to a big screen, it can feel overwhelming. 14:04:55 One thing we noticed for pinch and zoom: we have one point of interaction per hand, at the index finger, and the reason is to avoid false positives, because otherwise your hand can go right through the content, contact all five points, and create chaos in your interface. 14:05:31 We experimented with having all five fingers as contacts, and it's good to have as an option, but for controlled interaction we picked one. 14:05:40 The problem is that zooming then has to be done with two hands, with a kind of swimming motion. Some people wanted to do that, but others wanted a one-handed pinch to zoom, so we tried to implement it, and what we learned is that it's very hard to make that feel controlled, because there's no physical barrier stopping your fingers. 14:05:59 Both fingers go into the virtual screen and come out at different times, so as you pull your hand out, things zoom like crazy or pan around. We spent a while trying to figure out the math and all the tricks to make it work, and it's really challenging. 14:06:18 That's why we went with one finger.

For the keyboard, which I didn't show here, we actually had a keyboard you could hunt-and-peck type on, and it identified a bunch of tracking challenges the hand tracking team would need to solve, which they did. We ended up shipping a keyboard you can type on directly with your fingers, instead of the HoloLens 1 keyboard, which was very hard to use: you had to look and click one key at a time. 14:06:44 Of course, people all wanted to type with all their fingers, which is a really great area of research if anyone wants to try to crack that problem. It's pretty interesting, I have some ideas there, and I've seen some work on in-air typing with all ten fingers,
but it just wasn't reliable enough to implement. Yeah. Okay, next page. 14:07:13 So this prototype proved that yes, people can do it; it's the first thing people think of to do. We just brought that metaphor into the HoloLens for interacting with 2D content: use the touch screen interaction metaphor, touching and scrolling with one finger, and zooming with the index fingers.

14:07:37 More about this single point of interaction per hand. Quickly, for folks who don't have an index finger: usually the tracking just snaps the index finger to a different finger, and if no fingers are available, we provide a way to interact using voice; 14:07:57 there are voice commands to select items, scroll, page down, and so on. What's funny about one point of interaction per hand is that people don't actually realize it's one point of interaction. They'll press their whole hand in and think they're using their whole hand to press, when really it's just the index finger. That's kind of a fun trick. 14:08:17 My favorite example of why to use one point of interaction is Hand Physics Lab: it has a mode where all fingers can press, and it's very hard to be precise, so as I'm typing, I'm hitting all the other keys. That's exactly the problem we hit when trying to allow all fingers. We tried all the permutations of which fingers to turn on and off, and tried dynamically toggling them, for example turning off all the other fingers as soon as the first finger collides with the keyboard. 14:08:47 We tried that and many, many other things, and, as so often seems to happen, the simplest thing ended up working the best.

14:08:56 I did want to mention that the finger cursor is really important, especially when you don't have hand occlusion, because without hand occlusion, depth perception up close is really challenging. 14:09:12 Having feedback that communicates the proximity of your hand to the content is very important. This is all implemented as a custom shader that takes the finger positions as inputs, and there's a lot of detail in the cursor. For one, it's shaped like a ring, so you can see what's underneath your finger; the center of the ring doesn't occlude the actual content. Second, there's a shadow underneath the cursor that changes opacity and size as you get closer, so you can use the convergence of the shadow to know when you're actually contacting the object. 14:09:55 In this prototype you can see the shadow fading in and out and getting blurred as you get closer. 14:10:02 It's also important to render a contact pulse and play a sound on contact; again, we're cranking up all the visual and audio cues because we don't have haptics to give us a real signal. 14:10:16 So: there's a finger shadow that changes size and color as you get closer, a ring that helps you do targeting, and a contact pulse, visual and sound, when you make contact. Without all of that, it can be very difficult to interact with 2D content.
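The shipped cursor is a custom shader, but the core behavior can be sketched at the component level. Everything here (the names, the ranges, the sprite-based shadow) is an illustrative assumption, not the real implementation.

```csharp
using UnityEngine;

// Component-level sketch of the finger cursor: a ring sits where the fingertip
// projects onto the surface, and a shadow under it shrinks and becomes more
// opaque as the finger approaches, making depth readable without occlusion.
public class FingerCursorSketch : MonoBehaviour
{
    public Transform fingertip;        // tracked index fingertip
    public Transform surface;          // the 2D slate being targeted
    public Transform ring;             // ring visual (hole shows content beneath)
    public SpriteRenderer shadow;      // soft shadow sprite under the ring
    public float maxRange = 0.15f;     // distance at which the shadow fades out

    void Update()
    {
        // Project the fingertip onto the slate's plane.
        Plane plane = new Plane(surface.forward, surface.position);
        Vector3 onSurface = plane.ClosestPointOnPlane(fingertip.position);
        float distance = Mathf.Abs(plane.GetDistanceToPoint(fingertip.position));

        ring.position = onSurface;
        ring.rotation = surface.rotation;

        // Near = small, opaque shadow converging under the ring; far = big, faint.
        float closeness = 1f - Mathf.Clamp01(distance / maxRange);
        shadow.transform.position = onSurface;
        shadow.transform.localScale = Vector3.one * Mathf.Lerp(1.5f, 0.5f, closeness);
        Color c = shadow.color;
        c.a = Mathf.Lerp(0.1f, 0.8f, closeness);
        shadow.color = c;
    }
}
```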
14:10:35 In fact, a story I like to tell is that fairly close to when we were announcing the HoloLens 2, we didn't have any sort of cursor implemented, and I was getting so much feedback. People were planning to just cancel the whole idea of touching holograms and go back to the HoloLens 1 model, because they thought hand tracking was completely broken: they just could not navigate Settings to get their IP address. 14:11:07 We made one change: we turned on the cursor. And then I heard nothing. I went from very concerned complaints from upper management to silence, and when people aren't saying anything, that's actually a good sign. It also shows you the importance of this one detail: the hand tracking didn't change, but we added a cursor, and people started being able to successfully complete their tasks. 14:11:25 It's funny, because people always blame the hand tracking data, but sometimes it's the visual design and the feedback. Anyway, that's the finger cursor design; a lot of detail goes into it.

14:11:40 I'm going to go through these next ones quickly, because I only have 15 minutes left. 14:11:48 I also wanted to talk about an elicitation study we ran to understand what hand poses people use when interacting with holograms. 14:12:00 For this study we had a bunch of holograms in the space that didn't react to movement, and we just asked people: how would you rotate this fish? How would you interact with this keyboard, near and far? The goal was to identify the different hand poses people used, as ground truth data collection for the hand tracking team. 14:12:23 So what did we learn? The main thing was that in the absence of control points, little affordances you can grab, 14:12:37 people basically treat objects like the real thing. They would try to pick up the gem with two hands, or frame the gem, as if it were a physical interaction. It was really funny: for large objects they would try to hug the house, which would never work with the hand tracking constraints we have. 14:12:56 But if you show affordances, people instantly pick up that they need to grab the corners or the handles. 14:13:03 And the really great thing is what happens when you hide those affordances later (we switched the order in which we showed them): people are very smart, they remember the affordances are there, and they reach for the imaginary affordances. 14:13:16 This was huge for us, because it showed a way to get people to do sensor-friendly interactions. The sensors like it when your hands are facing you and you do these pinches; they really don't like vague grabs. 14:13:34 Again, physics is amazing, but it's very hard to implement a generic grab when you can grab in a million different ways. 14:13:43 So yeah, handles work very well; people immediately figure out how to move their hands to grab them.
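A minimal sketch of that idea of showing handles only when they're needed, assuming a tracked hand transform; the distance threshold and handle wiring are illustrative.

```csharp
using UnityEngine;

// Proximity affordance sketch: corner handles appear when a hand gets close
// and hide again when it leaves, guiding sensor-friendly grabs without
// permanently cluttering the space.
public class ProximityAffordances : MonoBehaviour
{
    public Transform hand;            // tracked hand position
    public GameObject[] handles;      // corner/edge handle visuals
    public float showDistance = 0.4f; // hand closer than this reveals handles

    void Update()
    {
        bool show = Vector3.Distance(hand.position, transform.position) < showDistance;
        foreach (GameObject handle in handles)
            if (handle.activeSelf != show)
                handle.SetActive(show);
    }
}
```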
14:13:53 We used that in the design of the bounding box in HoloLens 2, where the affordances appear and disappear so they don't visually clutter your space; 14:14:07 people just remember to use the corners. Then, and I like to talk about this, for the second part of the study we built a shell that used every single gesture people did in the elicitation study, like picking something up with two hands from the sides or from underneath. 14:14:27 Basically, we were trying to prove: can you build a functional shell with direct interaction? The answer was yes, but from this prototype we learned, again, that control points are really good at guiding behavior. 14:14:43 Another thing we learned, from an implementation point of view, is that the code gets very complex when you try to support every single possible way to grab something. 14:14:53 If you can grab something with two hands, but also frame it with your two hands, or grab it from underneath, you get into a lot of cases where, as your hand approaches, the state machine is ambiguous: you could be doing a two-handed grab, a framing, or a grab from the bottom. This was a really complicated implementation even in a prototype, which means that if it were to go into a product it would be very hard to maintain, not very tractable, because even for a simple set of interactions, you'd be surprised how many bugs come up 14:15:23 in practice. So we went with the simple "grab the affordances and move them around" interaction.

14:15:33 So that's near interaction, and now I've left myself only 13 minutes for far interaction. 14:15:42 The funny thing is, you can talk about near interaction for a long time, and it's actually the easy problem. Far interaction is the really hard thing, because there's no real-world analog for interacting with something at a distance, unless you're using some sort of tool. 14:16:00 So the challenge was: something is far away. How do you manipulate it with your hands? 14:16:05 We tried many, many different interaction techniques, and I want to explain why each of them didn't work, and why we ended up with the hand ray, a ray that goes from your shoulder through your palm. 14:16:20 The short answer is that it was the most stable solution; in other words, it let you select things reliably.

14:16:28 One thing we tried was finger pointing: casting a ray from your eye to the tip of your finger. It's fantastic for quickly pointing, it's very low fatigue, and you can immediately point at the object you want. 14:16:41 But the problem is: how do you select? You can use your voice, which is great, but we needed a way to select without voice. 14:16:50 If you do a pinch, which is our favorite because there's that haptic contact between your index finger and thumb, well, you'd be surprised how much the cursor moves as you're doing the pinch. It was very hard to select. 14:17:04 I had a task where you have a ring of buttons and you just want to select them, and it was really, really difficult. Which, by the way: again, a task in a prototype. We had that in almost all of our prototypes.
14:17:14 Okay. The next thing is pointing with your wrist: the ray you cast is based on your palm or wrist orientation, whichever joint is the most stable. 14:17:25 In general, you want to pick something very stable under pinch conditions, so we looked for the joint with the highest stability, the one that moves the least. 14:17:32 This is what everyone expects: most times when we show people the hand ray, they ask, "why didn't you just use the wrist?" One advantage is that it lets you point down at things, and the hand ray we shipped doesn't point down super well. 14:17:56 But the disadvantage, as you can see, is that as I do a pinch or a grab, the ray jumps a ton. And yes, we did try freezing the ray, detecting that a pinch was happening and freezing the ray orientation, 14:18:07 and it kind of worked, but there would be cases where it didn't. Again, you end up picking the thing that's most reliable for shipping a general shell interaction; for specific applications or games, these techniques can work very well, but our constraint was that it had to work in general.

14:18:26 I also want to talk about flying hands, which was a favorite of mine. The idea was: okay, you can interact near with your hands directly; 14:18:37 what if, when the object is far away, you use the exact same metaphor of direct interaction, but your hands fly out toward the object? 14:18:42 This can actually work really well in highly constrained environments, like a game, where you control precisely where the user's controls are. 14:18:52 If you have fixed controls, you can create a mapping (when you look at an object, where does the hand fly to?) and hard-code all of those things. But in a general shell, where you can move things around, it was pretty hard to come up with a general rule for where to fly the hands to, so we didn't go with that. 14:19:23 Also, some people didn't feel comfortable with the disembodied hands, though I think that's just an artifact of the art treatment on this prototype, which was dev art.

14:19:33 So what did we end up shipping? This is something I'm now seeing on other platforms too; the Quest uses something similar, where they try to detect when you're standing and move the shoulder point up or down. 14:19:48 It's a ray cast from an approximated shoulder, based on where your head is, through the palm joint, or another stable joint on the hand. 14:19:59 Again, we shipped this largely because the ray is stable when you're pinching. There are downsides: 14:20:06 it requires a lot of arm movement, you have to raise your hand up, and it's really hard to target an item on the floor. 14:20:16 It's funny: this is kind of the least bad solution we could find, because everything had disadvantages; there's no one great solution. And it's interesting to me that other platforms are starting to use this technique too. 14:20:32 And, you know, this was on the Leap Motion as well.
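A minimal Unity-style sketch of the shipped idea: approximate a shoulder from the head pose and cast from it through the palm. The shoulder offsets here are placeholder values, not the tuned ones; the point is that neither ray anchor is a finger, so the ray stays comparatively stable through a pinch.

```csharp
using UnityEngine;

// Hand-ray sketch: cast from an approximated shoulder (offset from the head)
// through the palm joint. Offset values are illustrative.
public class ShoulderPalmRay : MonoBehaviour
{
    public Transform head;             // head/camera pose
    public Transform palm;             // tracked palm joint (a stable joint)
    public bool isRightHand = true;
    public LineRenderer rayVisual;     // optional line to draw the ray
    public float maxDistance = 10f;

    void Start()
    {
        if (rayVisual != null) rayVisual.positionCount = 2;
    }

    void Update()
    {
        // Approximate the shoulder: below and to the side of the head.
        Vector3 sideways = head.right * (isRightHand ? 0.15f : -0.15f);
        Vector3 shoulder = head.position + sideways - head.up * 0.2f;

        Vector3 direction = (palm.position - shoulder).normalized;
        Ray ray = new Ray(palm.position, direction);

        Vector3 end = ray.origin + ray.direction * maxDistance;
        if (Physics.Raycast(ray, out RaycastHit hit, maxDistance))
            end = hit.point;   // target whatever the ray lands on

        if (rayVisual != null)
        {
            rayVisual.SetPosition(0, ray.origin);
            rayVisual.SetPosition(1, end);
        }
    }
}
```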
So Leap Motion had this, and the Quest has this solution too. Yeah. 14:20:41 Okay. The really nice thing about this metaphor, hand rays at a distance and direct grab nearby, is that it matches the motion controllers, the six-degree-of-freedom controllers in VR, 14:20:50 which use basically the same technique. When the interaction model matches, rays at a distance and directly grabbing nearby, 14:21:03 you can do really nice things with controllers and hands behaving the same way.

14:21:06 Okay, so that's really the interaction model with hands in a nutshell: directly grab objects and interact, prefer pinching, moving, and pressing (press buttons, grab and move movable objects), 14:21:25 and at a distance, use a hand ray. That's it, and I think one reason it shipped is that you can describe it in that many words; it's fairly simple. 14:21:33 If it were more complex, it would be hard to teach, and then it wouldn't succeed, because you want people to be able to just walk up and use it. 14:21:44 The really cool thing was when we were in Barcelona announcing this: it felt so great that people would put on a HoloLens, and the first thing they had to do was press a button, and many of them pressed it successfully. 14:21:52 Not everyone, because you'd be surprised how hard it is to get that working well, but compared to the HoloLens 1 it was just really awesome to see.

14:22:01 So I quickly wanted to share a useful tool that can be great for prototyping all sorts of hand interactions, not just on the HoloLens but with a Leap Motion in VR, or even the Quest: MRTK, the Mixed Reality Toolkit. 14:22:26 It's something Microsoft provides, and it's a user control library with fairly high quality in the input experience. 14:22:31 I can't say the API is the easiest to use, and there are a lot of legacy reasons for that, but we spent a lot of time making the feel of the button pressing and the movement match what's in the shell. 14:22:43 A lot of work went into making it feel good and providing a lot of controls, like buttons, that you can just drag into your application. 14:22:50 So MRTK is the thing to remember: a toolkit that lets you build UIs. And it's not at all just for Microsoft stuff. We showed HoloLens 2, HoloLens 1, and Windows VR, 14:23:12 but it also works on other VR headsets with a Leap Motion for hand tracking, on Magic Leap, and on the Quest, which is really awesome. 14:23:23 This is a sizzle reel of the things MRTK provides: buttons (the piano keys are also buttons), touch interactions built in, and so on. 14:23:35 What I really love about it is that you don't need a HoloLens to build anything. You can build your whole app with no hardware at all, simulating everything with just a mouse and keyboard, simulating all the hands, and then build it and run it on a HoloLens. 14:23:50 In fact, in the early days it was really hard to build things for HoloLens,
14:23:55 so we did almost all the iteration in the editor. 14:23:57 I'd then build it once or twice to make sure it ran, because builds took a really long time. Nowadays things are better. 14:24:02 You can see all the controls and all the prototypes I talked about in here, and the big reason for that is that I implemented all of these things after working on the prototypes; 14:24:17 it was time to put them all into MRTK. It was really awesome to have the prototypes, because I could point the people I was collaborating with at them and say, "this doesn't behave like the prototype, 14:24:27 make it behave like the prototype," and so on. 14:24:30 There has also been a lot of really great collaboration, with new controls built by other people, especially on the visual side. It's really, really nice.

14:24:41 Okay, I want to quickly talk about rapid iteration using the Unity game engine. If you're into AR and VR, I highly recommend learning Unity. 14:24:51 The in-editor input simulation is really useful: you move around with the WASD keys, then press and hold the right mouse button, and there's a hand, 14:25:11 and it's literally the same hand data that comes in when you're using your articulated hands. So you can build everything without even having a headset. 14:25:19 The other great thing is that if you have, say, a VR headset plus a Leap Motion, which is cheaper (I know the HoloLens is very expensive; unfortunately I don't control that), 14:25:33 you can build your whole UI with that and then deploy to a different target; everything's abstracted away, so that can work pretty well.

14:25:42 So the point is: you can do everything with just mouse and keyboard simulation in the editor. That's really important in case you're building prototyping tools for other platforms in the future; have a way to simulate and test everything in the editor, because it gives you really rapid iteration times. 14:25:57 I was shocked this wasn't implemented before I got there: you had to deploy everything to a HoloLens. 14:26:03 One of the first things I did was build the in-editor simulation.

14:26:11 Question: for people who are familiar with Unity, how far could you get without writing code? 14:26:18 Oh, quite far. All the controls are pre-built, so if you just wanted to put in, say, a cheese model and get a bounding box around it, you import the cheese model 14:26:34 and add the bounding box component to it in the editor. You can also do a lot with physics, colliders, and triggers: 14:26:45 when a collider enters another collider, you can raise what Unity calls an event (a UnityEvent, not a C# coding event) 14:26:58 and respond to it by playing an animation, for example. So I'd say this is quite designer-friendly.
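As a tiny illustration of that designer-friendly pattern (a generic Unity sketch, not an MRTK component): a trigger volume that fires a UnityEvent wired up in the Inspector, plus an optional Animator trigger.

```csharp
using UnityEngine;
using UnityEngine.Events;

// Designer-friendly trigger sketch: drop this on an object with a trigger
// collider, then wire OnEntered to anything in the Inspector (play a sound,
// enable an object, start an animation) without writing more code. Unity
// requires a Rigidbody on one side of the trigger pair for events to fire.
[RequireComponent(typeof(Collider))]
public class TriggerResponse : MonoBehaviour
{
    public UnityEvent onEntered;        // hooked up in the editor, no code needed
    public Animator animator;           // optional
    public string animatorTrigger = "Pulse";

    void OnTriggerEnter(Collider other)
    {
        onEntered.Invoke();
        if (animator != null)
            animator.SetTrigger(animatorTrigger);
    }
}
```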
14:27:13 So, just really quickly, for people who do get their hands on a HoloLens, I wanted to share something, because I don't know if everyone knows about it and it's important: the Holographic Remoting Player. Remember that name. 14:27:25 It's a really useful way to iterate quickly without having to deploy to a HoloLens. The workflow before this existed was: you had your Unity solution, 14:27:35 it took about five minutes to build a Visual Studio solution, then a couple more minutes to deploy to the HoloLens, 14:27:41 and you'd have to do that, up to ten minutes, every time you wanted to change something and see it on the device. 14:27:48 With holographic remoting, which is now enabled by default in Unity, you just run the Holographic Remoting Player on your HoloLens, hit Play in Unity, and you see your game in the headset. It's really, really handy. 14:28:06 So: Holographic Remoting. I just wanted people to know that name and that it exists, in case you ever find yourself prototyping on a HoloLens.

14:28:17 To conclude, here are the things to remember from this talk. It's very important to compensate for the lack of haptics when you're doing in-air interactions by using visuals and sound: 14:28:34 crank up the visual feedback, crank up your sound, and communicate the state of your objects (hover state, contact state, moving state) with a lot of visuals and a lot of sound. 14:28:45 And whenever possible, use familiar interactions, like pressing, and grabbing to move.

14:28:52 I didn't touch on this much, but I will say that AR is not the same as VR. If you take a VR game and just map it into AR, you might think everything will work the same, but it really doesn't, because you can see your hands. 14:29:13 In VR you can hide the hand tracking latency, because you can render the hands to match the visuals you're seeing, 14:29:22 but you really can't in AR, because you can see your actual hands. That makes things a lot harder, in a way. 14:29:25 Scale, size, and the tasks you do are also different in AR, because you see the real world. 14:29:37 For example, we did some prototyping in VR, and when I asked someone to move something somewhere, they would never move it very far; they'd kind of just rotate it around within a bubble. 14:29:51 But the minute we put the prototype into AR, people would try to place objects on top of furniture; they started doing different tasks. That's kind of interesting.

14:30:01 Then, I guess, my last piece of advice for anybody who wants to work on interaction techniques, especially if you go into industry: always be prototyping. 14:30:20 I've found that the best way to influence people in industry is, when you have an idea, to build a prototype of it and show it to people. You'll be amazed, if it's a good idea, how quickly the idea spreads through the organization 14:30:36 when people have something they can try. And, importantly, share your work and be open to feedback: actually listen, write it down, and change things based on it. That's literally what I did to ship HoloLens 2.
14:30:54 So, for folks thinking about what you can do to contribute, from an interaction techniques 14:31:00 perspective: definitely interacting with objects at a distance; we haven't found the best interaction for that. Also, enabling hand interactions with physics, like Hand Physics Lab in VR, 14:31:14 and making that work in AR, would be a really awesome, big challenge. Also, a lot of these interactions are kind of high effort right now, 14:31:21 so thinking about how to make them low-calorie, low-effort ways to manipulate objects is a big area. 14:31:28 Personally, it still feels like the Wild West in terms of UX control standards; 14:31:36 it would be awesome to see better standardization of interaction techniques in AR and VR. 14:31:42 And the iteration speed has gotten a lot better with the Holographic Remoting Player, but I still think it's 14:31:49 something to improve. Okay, that's all; I'm open to questions. I wanted to mention, 14:31:54 if anybody's graduating soon, that we're hiring on the mixed reality team, and that the best way to find all the open jobs is to search for "mixed reality" in quotes. Mixed reality, not 14:32:09 HoloLens; the keyword at Microsoft, we have a lot of lingo, is "mixed reality" in quotation marks, at careers.microsoft.com. I checked this morning: 14:32:19 there are 118 jobs posted. But you've got to do it in quotes. And that's it. Yeah, open for questions.

14:32:32 So we do have time for questions, 13 minutes. People, remotely or local? 14:32:40 I'll get us started. One of the things that I thought was interesting is that the end of the interaction was really crucial: 14:32:53 things were moving accidentally when you tried to release your fingers, and you were trying to keep your hand in the right place and then do something. 14:33:04 Whereas with the mouse, nobody much thinks about the end of the interaction. And we kind of saw that years ago, when we were trying to do laser-pointer things: the laser pointer wasn't 14:33:15 designed for it at all, the button was on the side, whereas mice were carefully designed so that when you press the buttons, they don't move. 14:33:26 Do you think there's something that could be done to avoid having to think about the end of the interaction? 14:33:37 Or is that really fundamental to trying to use bare hands?

14:33:43 I think that, well, we could have hardware, like a button. But then there's a challenge with buttons, which is that the hardware isn't always with you; you have to pull out a button, or wear something like a 14:33:53 ring on your finger, or a watch, and that's a two-handed interaction. 14:34:00 So if you had a piece of hardware, then you wouldn't have to think about it. 14:34:02 The other thing that could be done is, at the API level, to provide a time of release. 14:34:11 So we have an up event; well, actually, 14:34:17 anyway, it doesn't matter, I'm getting into the weeds. 14:34:18 Imagine that you have a release event, where you know when the buttons are released. 14:34:22 If, at the lower level, the hand tracking team can provide a hand pose at release that looks back in time to when the fingers started to open, or does some extra sensing or smoothing or something, then at 14:34:44 the interaction layer you wouldn't have to think about it, because when you get the event, you can match the hand pose to that point back in time, and then it would be consistent across all applications.
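A minimal sketch of that look-back-in-time idea, assuming you can sample the palm position every frame; the type name, the 150 ms window, and the simplification of taking the oldest sample in the window (rather than detecting exactly when the fingers started opening) are all assumptions for illustration.

    using System.Collections.Generic;
    using UnityEngine;

    // Sketch: keep a short history of hand samples so that when a release
    // event fires, you can recover the pose from just before the fingers
    // started opening, instead of the current (already perturbed) pose.
    public class ReleaseHistory
    {
        // (timestamp, palm position) samples from the last WindowSeconds.
        private readonly Queue<(float time, Vector3 position)> history =
            new Queue<(float, Vector3)>();

        private const float WindowSeconds = 0.15f; // assumed look-back window

        // Call once per frame while the hand is tracked.
        public void Record(float now, Vector3 palmPosition)
        {
            history.Enqueue((now, palmPosition));
            while (history.Count > 0 && now - history.Peek().time > WindowSeconds)
                history.Dequeue();
        }

        // Call when the release event fires. A real implementation would search
        // for the frame where the finger aperture started increasing; here we
        // just return the oldest sample in the window.
        public Vector3 PositionBeforeRelease()
        {
            return history.Count > 0 ? history.Peek().position : Vector3.zero;
        }
    }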
A little caveat: so, 14:34:57 the hand ray, for a long time we weren't going to ship that at the API level. That would have meant that everyone would have had their own implementation of a hand ray, which would have made interactions at a 14:35:08 distance especially challenging, because everything would be a little different. So I think the one way to make you not worry about releasing is, if we solve this once at a lower level, then everyone can use that. That would be my best guess: 14:35:28 that, or a piece of hardware.

14:35:36 Are there people online? I see a question in the chat. It asks: 14:35:42 do you ever see AR interactions taking over complex workflows like Photoshop, video editing, etc.? 14:35:48 I do. I think what I see is a fusion of using AR with the desktop, with a mouse and keyboard, which is actually really fun to think about. And in case 14:36:03 you don't know this, you can actually pair a Bluetooth mouse and a Bluetooth keyboard to a HoloLens; in fact, that's how, when we're prototyping, we 14:36:12 sometimes quickly change conditions, using keys. I would do this all the time in my prototypes: press the K button to change between modes. 14:36:22 Because with complex workflows you need a lot of precision, and the mouse and keyboard are really good 14:36:31 at that; I mean, they're sort of the least-fatigue, highest-precision inputs. 14:36:41 And I see AR being really useful for complex workflows, because in complex cases you can benefit a lot from having lots of things around you. 14:36:50 So imagine you're working on a bunch of screenshots for a mobile app, and you can only see one here; 14:36:59 but if you look around, you can find all of your screens expanded off your desktop. So, expanded desktop scenarios. There are definitely expanded desktops in VR, but I'll say 14:37:10 that, from a comfort point of view, personally speaking, there's a real difference in comfort between AR and VR, because in AR you see the real world. 14:37:20 So if you feel a little nauseous from wearing VR, you 14:37:24 don't have that with AR. And at least with the HoloLens, 14:37:28 you can always flip up the visor, which is a big, big difference in comfort. 14:37:32 In fact, there is an extended desktop app for HoloLens shipped in the Windows Store, where you can add as many screens as you want to your desktop, and you can use your real desktop and then just 14:37:41 move over and use the other screens. I actually used it for a while, and it was really great. 14:37:51 So I definitely see that taking over.

And do the screens stay stable enough to, you know, 14:38:00 read them, and how is the resolution? I'd say that right now, this is what we call at Microsoft a point in time. 14:38:11 So at this point in time, it's something that can be improved. 14:38:15 As you know, that's one of the biggest things that I would say we would work on improving: the display resolution and quality.
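Here is a quick sketch of the press-K-to-change-modes prototyping trick mentioned above, using Unity's legacy input API; the mode names are hypothetical conditions.

    using UnityEngine;

    // Sketch: cycle between prototype conditions with a paired Bluetooth
    // keyboard (or the keyboard in the Unity editor).
    public class ModeSwitcher : MonoBehaviour
    {
        public enum Mode { DirectTouch, HandRay, GazePinch } // hypothetical conditions

        public Mode Current { get; private set; }

        private void Update()
        {
            // Press K to cycle to the next condition.
            if (Input.GetKeyDown(KeyCode.K))
            {
                int count = System.Enum.GetValues(typeof(Mode)).Length;
                Current = (Mode)(((int)Current + 1) % count);
                Debug.Log("Switched to mode: " + Current);
            }
        }
    }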
14:38:28 Sorry. A student asks: have you thought about reducing the movement of the hands and leveraging small micro gestures to do the same interaction, which could lower the level of 14:38:41 fatigue? Yes, definitely have, definitely prototyped that, and it's really fun; it's actually surprising how little movement you can eke out of the hand sensors. The challenge is with disengaging: once you've 14:38:58 engaged, you can do really small movements, but then, as you disengage, the finger moves again, so that can be a little hard. 14:39:06 But maybe there are clever ways that you can think of for engaging and disengaging; again, when you disengage, 14:39:11 look back in time to try to predict when was the last time that you engaged. 14:39:15 We've definitely, I can't say any more detail than that, but it's definitely something we've looked at. 14:39:22 I love the name micro gestures, by the way.
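One common trick along those lines, offered here as a hedged sketch rather than how the product actually does it, is hysteresis on the engage and disengage thresholds, so that small tracking jitter and the disengage motion itself don't flicker the gesture state. The threshold values are made up for illustration.

    // Sketch: pinch engage/disengage with hysteresis. Engaging requires a tight
    // pinch; disengaging requires the fingers to open noticeably further, so
    // jitter near the threshold does not end the gesture by accident.
    public class PinchDetector
    {
        private const float EngageMeters = 0.02f;    // assumed thumb-index gap to engage
        private const float DisengageMeters = 0.04f; // assumed wider gap to disengage

        public bool IsPinching { get; private set; }

        // Call every frame with the current thumb-tip-to-index-tip distance.
        public void Update(float apertureMeters)
        {
            if (!IsPinching && apertureMeters < EngageMeters)
                IsPinching = true;
            else if (IsPinching && apertureMeters > DisengageMeters)
                IsPinching = false;
        }
    }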
14:39:25 Another student asks: what sort of audience do you recruit when testing XR prototypes? 14:39:34 If you recruit people who are XR enthusiasts, they can be biased toward the solutions, 14:39:37 but if you recruit novices, they are usually skeptical about new tech and don't, in turn, give a lot of unbiased feedback. At Microsoft, 14:39:48 the great thing is, we have the resources to find novices. 14:39:52 Recruiting is much harder with Covid, 14:39:55 but before Covid, for example, for the gesture elicitation study, we used all novice users, because we wanted to specifically know what gestures people would use when they basically had no experience. For rapid 14:40:12 prototyping, we do end up using the people in the immediate vicinity, which definitely can introduce some bias. 14:40:21 So usually, in industry, what I do is this: sometimes the answer is really obvious, and you can get it from just using people that are experienced, because there are obvious bugs. So you do the quick 14:40:35 thing to find all the obvious issues, and then, if there's something that you're really disagreeing on, 14:40:41 you do a larger study with novice users, because most people in the real world are novices to AR and VR, 14:40:47 so it's important to get those opinions. Brad, are you there still? 14:40:54 Yep. Okay, awesome. Oh, a question in the room.

14:40:59 Yeah, I know that you are a software engineer, but you talk a lot about interaction design and usability. And I also know that, as a designer working at an XR company, 14:41:11 sometimes they require coding skills. And I wonder, how do designers and engineers collaborate in these AR projects? 14:41:22 Also, I see that largely the user experience of an AR project depends on the implementation details, 14:41:31 like how you recognize the gestures. So if designers don't know how to recognize the gestures, or how to build the prototype, 14:41:40 it's probably hard for the user to use the program. So that's one question.

14:41:46 Oh, that's a great question. I work so much with designers; in fact, 14:41:51 on almost all of these prototypes I partnered with a designer. 14:41:55 So I worked with a designer, and they're my main tester, basically. And I'd say that I'm kind of a hybrid; 14:42:04 you have to pick a title, but in reality, many of the designers are software engineers. 14:42:08 The best, the most successful designers can also code. 14:42:15 Well, maybe not the most successful, I would say, but many of them can code a little bit. And 14:42:21 I have a little design background, coming from the CMU HCII, and also research, 14:42:26 so it's always a hybrid. I mean, all these ideas were made in collaboration with at least one designer, and, I know we're short on time, 14:42:39 together we'd mock up an idea on a whiteboard, or they'd give me some slide decks, and then I'd go build it as quickly as I can, and the minute I have something, I show it to them, and then 14:42:50 they modify their design. It can be a whiteboard; sometimes it's in PowerPoint; 14:42:58 sometimes they actually mock it up with a video, which can be really effective. 14:43:01 I would say that it's really important to find partners in general; when working, you have to find good partners, because you work better together and you complement each other's skills. 14:43:13 I would say that you can use tools like the Mixed Reality Toolkit, or what Facebook, 14:43:20 I mean Meta, or whatever they're called now, has; I believe at some point they're coming out with an SDK that does automatic gesture recognition 14:43:27 as well. But the Mixed Reality Toolkit is a really good starting point for folks that are not coding as much. 14:43:37 But really, yeah, it helps a lot to find a partner; we work really closely together. 14:43:42 And then what designers do, I know we're over time, just quickly: they help promote your prototype and your ideas, and they can express your ideas in concise, very good ways. 14:43:53 All the images of the design principles are by somebody named Tony Tang; 14:44:00 he condensed everything into a way to communicate it, 14:44:03 and he thought of the whole model while I was focusing on the details of the prototypes. 14:44:08 So it was really important that we worked together on it. 14:44:13 Yeah, I think that's it.

Unfortunately, yeah, there's one more question; maybe you could avoid reading the student's name. 14:44:22 Oh, sorry about that. When you were prototyping and choosing the tasks to prototype, 14:44:29 what was the rationale for selecting these tasks? 14:44:32 Specific use cases? In that case, I worked with a designer, and we actually did the shotgun approach, where we tried to think of as many different ways as possible; 14:44:43 we tried to think of every user control, every control we could think of. So we weren't thinking of specific use cases; 14:44:48 we actually wanted to go as broad as possible. And so really, we just sat together in a room and whiteboarded it out, and then she would also come and give me more ideas. 14:45:01 So that's the rationale: we just tried to think of as many different ways as possible.

14:45:10 We're right at time. So that was really wonderful, and so relevant, fitting in exactly with all the other things we've done in the past.