Cool. I once created a speech-recognition tool built on MySQL and SoundEx, allowing me to create code by speaking through my headphones. It worked as a hierarchical “menu”: I could say “Create button”, the machine would respond with “What button?”, and so on. It produced Hyperlambda, of course. I doubt it can be done without metaprogramming.
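The phonetic matching was the MySQL/SoundEx part. Something like this TypeScript sketch captures the idea (a from-memory simplification, not the original code; the original did the lookup in MySQL, where `word SOUNDS LIKE 'buttun'` is shorthand for comparing `SOUNDEX()` values):

```typescript
// Simplified classic Soundex (ignores the h/w rule); the menu items
// below are made-up examples, not the original command set.
function soundex(word: string): string {
  const codes: Record<string, string> = {
    b: "1", f: "1", p: "1", v: "1",
    c: "2", g: "2", j: "2", k: "2", q: "2", s: "2", x: "2", z: "2",
    d: "3", t: "3",
    l: "4",
    m: "5", n: "5",
    r: "6",
  };
  const letters = word.toLowerCase().replace(/[^a-z]/g, "");
  if (!letters) return "";
  let result = letters[0].toUpperCase();
  let prev = codes[letters[0]] ?? "";
  for (const ch of letters.slice(1)) {
    const code = codes[ch] ?? "";
    if (code && code !== prev) result += code; // skip vowels and repeats
    prev = code;
  }
  return (result + "000").slice(0, 4); // pad/truncate to 4 characters
}

// Hierarchical "menu": at each level, match the heard word phonetically
// against the words that are valid at that level.
const menu = ["create", "delete", "button", "textbox"];
function matchCommand(heard: string): string | undefined {
  return menu.find((item) => soundex(item) === soundex(heard));
}

console.log(matchCommand("buttun")); // -> "button" (B350 === B350)
console.log(matchCommand("kreate")); // -> undefined (Soundex keeps the first letter)
```

The nice side effect is that misheard words like “buttun” still resolve; the classic weakness is that Soundex keeps the first letter verbatim, so “kreate” never matches “create”.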
One thing that bothers me is that this was 5 years ago, and speech support has basically stood 100% perfectly still in all browsers since then … 😕
> One thing that bothers me is that this was 5 years ago, and speech support has basically stood 100% perfectly still in all browsers since then … 😕
Not in all of them (e.g. Opera Mini, Firefox Mobile). It's a nice-to-have in browsers, especially for accessibility, but screen readers already do the job for blind users. And in any case, most implementations for other purposes stream the audio to a backend, so they can process the incoming speech and also use the user's feedback to train an AI, among other things, without hurting performance.
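For the in-browser half, at least detection is trivial; a minimal sketch (Chromium still ships the recognition half under a `webkit` prefix, and even there the audio is streamed to Google's servers under the hood, which fits the backend point above):

```typescript
// Minimal feature detection for the recognition half of the Web Speech API.
// Chromium exposes it as the prefixed webkitSpeechRecognition; Firefox and
// Opera Mini don't ship it at all, hence the backend fallback below.
const SpeechRecognitionImpl =
  (window as any).SpeechRecognition ?? (window as any).webkitSpeechRecognition;

if (SpeechRecognitionImpl) {
  const recognition = new SpeechRecognitionImpl();
  recognition.lang = "en-US";
  recognition.onresult = (event: any) => {
    console.log("Heard:", event.results[0][0].transcript);
  };
  recognition.start();
} else {
  // Fall back to the pattern described above: capture audio with
  // getUserMedia() and stream it to a backend recognizer, e.g. over a WebSocket.
  console.warn("SpeechRecognition is not available in this browser.");
}
```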
> ...allowing me to create code by speaking through my headphones...
>
> ... I doubt it can be done without metaprogramming.
I agree on this. The concept "metaprogramming" is broad and covers the different ways it can work (or be implemented), and by its very definition it is a building block for this kind of application.
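In its simplest form it just means code that emits code; a toy sketch of turning a parsed voice command into source text (the `Command` shape and the emitted HTML are invented for illustration, nothing like the original Hyperlambda output):

```typescript
// Toy code generation: turn a parsed voice command into source text.
interface Command {
  action: "create";
  widget: "button" | "textbox";
  name: string;
}

function emit(cmd: Command): string {
  switch (cmd.widget) {
    case "button":
      return `<button id="${cmd.name}">${cmd.name}</button>`;
    case "textbox":
      return `<input type="text" id="${cmd.name}">`;
  }
}

// "Create button" ... "What button?" ... "save"
console.log(emit({ action: "create", widget: "button", name: "save" }));
// -> <button id="save">save</button>
```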