Here is something I've been meaning to try for years. Higher Bias settings make the line end a bit sooner than lower Bias settings. By rendering with three different Bias/Thickness combinations and then compositing the three versions in Photoshop with the "Darken" mode, we can get something that resembles the hand-tapered line of illustrations...
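The "Darken" blend mode just keeps the minimum (darkest) value at each pixel, so the trick above can be sketched in a few lines. This is a toy illustration, not the actual Photoshop or A:M pipeline: plain Python lists stand in for one grayscale pixel row from each render, and the three `thick`/`medium`/`thin` layers are made-up values for three hypothetical Bias/Thickness renders.

```python
# "Darken" compositing = take the minimum (darkest) value at each pixel.
def darken_composite(*layers):
    """Combine any number of grayscale layers by per-pixel minimum."""
    return [min(px) for px in zip(*layers)]

# Hypothetical single pixel row from three renders of the same stroke.
# 0 = black ink, 255 = white paper. Higher Bias ends the stroke sooner,
# so each layer covers a little less of the row than the one before it.
thick  = [0,  0,   0,  40, 255, 255]
medium = [0,  0,  40, 255, 255, 255]
thin   = [0, 40, 255, 255, 255, 255]

tapered = darken_composite(thick, medium, thin)
print(tapered)  # the darkest layer wins at each pixel, producing a taper
```

In a real image editor the same per-pixel minimum runs over every channel of every pixel, which is why stacking the three renders with "Darken" fakes the taper.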
Here is an overview of multi-touch gestures as they exist in Windows. A few of them should get a result even without new programming, but their meaning may not be the same as the mouse equivalent's in A:M.
Windows Touch Gestures Overview
Yeah, that was back when that was the only character they had!
Although, technically, the assignments said "Using Bishop, or an equivalent character..."
Maybe I coulda just put Shaggy or Rabbit in there.
For our first dialog assignment we only had mouth open and close and that did a lot...
YoureRight242b.mov
Most of "lip-sync" is really the body language and that's what wears me out.
If you can get a cheap copy of Jason Osipa's "Stop Staring", he teaches this fundamentals approach to lip sync. His first edition even has A:M coverage, but ignore the specifics of that because he was on an old version that didn't have CP weighting like we do now.
Yes, some things would have to be rethought. I can imagine doing modeling on a tablet with finger pointing, but animation with the timeline... maybe it should just stay impossible so no one starts making their animators do work on the subway ride home.
When I was at Animation Mentor, the "Bishop" model had up and down for the jaw and a control at each corner of the mouth that you could move in for "oo", out for "ee", up for smiles, and down for frowns, and that is what I did for my A:M version of Bishop.
They didn't teach "phonemes"; it was more about the mouth motion.
I imagine it's possible.
I presume macOS and now Windows have some library of code that reads the screen and converts gestures into messages to the operating system, but I don't know much about it or how one would code A:M to be ready for both. It's something Steffen would have to figure out, and I don't know that we have enough tablet users to warrant it yet.
I had thoughts along this line when I tried using A:M with a pen on my Cintiq, and I found I could get by without right-clicks or the keyboard by using buttons in the interface that I typically never touched.
But the buttons would be too small for finger pointing I think.
On mouth animation... I would not spend a lot of effort on phonemes. I would have the mouth be able to do the "ah", "oo", and "ee" shapes, and those will serve better than trying to do automatic lip sync with phonemes.