Speech recognition has advanced quite a lot in the last couple of years, and so have network speeds and readily available natural-language parsing systems. There's a lot more involved than just "listen to the mic, turn the words into text, google the text". That second step isn't particularly easy, and it was in a much worse state back in 2009-2010, before some fairly significant advances were made there.
Originally Posted by PalmPixi_User23
But I think what most services are doing (I could be wrong) is sending the raw audio straight to a processing system on the other end of the internet, which turns it into text, runs it through natural-language parsing into something it can try to understand, and then returns a response based on searching that. It's all quite messy.
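Just to illustrate the round trip I mean, here's a toy sketch of that pipeline. Every function name and the keyword "parser" are stand-ins I made up; real services run heavyweight speech and language models server-side, not anything like this:

```python
# Hypothetical sketch of the voice-search round trip described above:
# audio in, transcription, naive intent parsing, search-ready result out.
# All names here are illustrative placeholders, not any real service's API.

def transcribe(audio_bytes):
    # Stand-in for the server-side speech-to-text step; a real system
    # would run acoustic and language models over the audio here.
    return "what is the weather in portland"

def parse_intent(text):
    # Toy natural-language step: match a keyword and pull out a query.
    if "weather" in text:
        return {"intent": "weather_lookup",
                "query": text.split("in", 1)[-1].strip()}
    return {"intent": "web_search", "query": text}

def handle_request(audio_bytes):
    # The whole messy chain, collapsed into one call.
    text = transcribe(audio_bytes)
    return parse_intent(text)

print(handle_request(b"\x00\x01"))
# → {'intent': 'weather_lookup', 'query': 'portland'}
```

The real complexity lives inside that `transcribe` stub, which is exactly why everyone ships the audio off to a server instead of doing it on the phone.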
Theoretically, Autonomy, if it works properly, could have one hell of a good engine for that sort of thing, but I'm not sure how general their stuff is.
I'd love to see it, since people go nuts over voice-to-text-message features, but frankly, I think most of us would play with it for a few minutes and then never use it again.