I have a love-hate relationship with autocorrect. When you type gibberish and it automatically converts to what you really meant, you let out an "ahhhh", but when you type something correctly and the system gets in your face and switches it to something else, you get mad. I find that I often know when this is going to happen. When "almaer" turns into "Almaer" in a case-sensitive field, I growl loudly.
Google acquired BlindType (video above), which is one example of tackling this particular issue, and it is great to see technology in this space (e.g. Swype). I am sure we will see a lot of new technology come along to make life easier for those who want to quickly navigate around, or get a bunch of content into the device. Have you ever wanted to reply to an email with something of substance while mobile, and felt frustrated knowing that you would have to hammer the thing out on a small keyboard? I have. Using voice input has been very mixed for me, to the point where I no longer try it.
I got to thinking about how I would love to be able to use voice actions whilst typing. After all, we do that in other ways. Look at an Italian using his hands as he talks, or a radio DJ making changes as he raps. My particular urge was to say "set spellcheck off" as I typed something onto the device that I knew it would want to correct. Once I had done this, the idea kept coming to mind as I did other things. When opening new tabs in the browser, sometimes I want to open them in the background, but other times I want to open a new tab and jump right to it. The browser could give me a slightly different key combo for that, or I could say something like "jump" as I complete the action in question.
Once you know that you don't have something that you want, it is frustrating. I showed someone how they could hold down the home button on iOS and speak to the system: "call Dion Almaer mobile" worked like a charm. They then tried "open Facebook", but to no avail. It feels like it is only a matter of time before systems open up voice as a first-class citizen here. Any third-party application should be able to add its own voice commands into the substrate.
Once again I feel like an ape at my computer. I can poke and ug (point and click), and I am starting to be able to do more with touch, but let me speak and use other senses! SmelloKit, where are you?!