This week we finalized the set-up. We have a local server that relays messages from the app to the home and vice versa. The server was built on the MEAN stack: MongoDB as the database, Express and Angular as the frameworks for the server applications, and Node.js as the runtime that everything runs on. Interaction between the app and the server is done via a RESTful API.
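To make the relay idea concrete, here's a rough sketch of what a request from the app's side might look like. The endpoint path and payload shape are placeholders I made up for illustration; the server's actual routes weren't settled yet.

```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class HomeServerClient
{
    // Hypothetical: "/api/devices/{id}/command" and the JSON payload
    // are assumptions for illustration, not the server's real API.
    static readonly HttpClient Client =
        new HttpClient { BaseAddress = new Uri("http://localhost:3000") };

    public static async Task SendCommandAsync(string deviceId, string command)
    {
        var payload = new StringContent(
            "{\"command\":\"" + command + "\"}", Encoding.UTF8, "application/json");

        // POST the command; the server relays it on to the home device.
        HttpResponseMessage response =
            await Client.PostAsync("/api/devices/" + deviceId + "/command", payload);
        response.EnsureSuccessStatusCode(); // throw if the relay failed
    }
}
```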
Morgan and I played around and figured out how to send requests to the server using HTTP methods, but we couldn't figure out how to parse the response messages. We also needed to determine the content and format of the server's responses. We brainstormed with Dr. Anderson and decided that the app should expect to receive a JSON document with a list of rooms that have voice-capable devices. Each room would catalog its voice-capable devices, and each device would hold data such as its name, current state, and available commands. With this established, Morgan and I can actually code a decent portion of the app.
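Here's a sketch of how the app might fetch and walk that response. The URL and the key names ("name", "devices", "state", "commands") are my guesses at the format we agreed on, not final.

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;
using Windows.Data.Json;

class RoomListClient
{
    static readonly HttpClient Client = new HttpClient();

    // Assumes the top-level JSON value is an array of room objects.
    public static async Task LoadRoomsAsync()
    {
        string body = await Client.GetStringAsync(
            new Uri("http://localhost:3000/api/rooms"));
        JsonArray rooms = JsonArray.Parse(body);

        foreach (IJsonValue roomValue in rooms)
        {
            JsonObject room = roomValue.GetObject();
            string roomName = room.GetNamedString("name");

            foreach (IJsonValue deviceValue in room.GetNamedArray("devices"))
            {
                JsonObject device = deviceValue.GetObject();
                string deviceName = device.GetNamedString("name");
                string state = device.GetNamedString("state");
                JsonArray commands = device.GetNamedArray("commands");
                // ...feed these into the app's UI and command vocabulary...
            }
        }
    }
}
```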
Coding voice recognition for Windows apps involves two parts. The first is activation of the app via a command. This means the app launches and performs an action when it's given one of its commands through Cortana (i.e. outside the app). The second is responding to in-app commands, that is, once inside the app, reacting to voice commands directly (no Cortana intermediary). Morgan and I decided to split these tasks. She'll implement voice recognition via Cortana. I'll work on the in-app voice recognition. While Morgan and I work on the app (the client side), Dr. Anderson will be working on arranging the server (1) to organize the data more helpfully and (2) to send messages in the format we decided on.
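To give a flavor of both halves, here's a rough sketch. The command phrases are made up, and we haven't written any of this for real yet, but VoiceCommandActivatedEventArgs and SpeechRecognizer are the standard Windows APIs for the two cases.

```csharp
using System;
using System.Threading.Tasks;
using Windows.ApplicationModel.Activation;
using Windows.Media.SpeechRecognition;
using Windows.UI.Xaml;

sealed partial class App : Application
{
    // Part 1 (Morgan's half): Cortana hands the app a voice command
    // at activation time; RulePath[0] names the command that matched
    // in the app's voice-command definition file.
    protected override void OnActivated(IActivatedEventArgs args)
    {
        if (args.Kind == ActivationKind.VoiceCommand)
        {
            var voiceArgs = (VoiceCommandActivatedEventArgs)args;
            string commandName = voiceArgs.Result.RulePath[0];
            // ...route to the matching action here...
        }
    }

    // Part 2 (my half): in-app recognition, no Cortana intermediary.
    public static async Task<string> ListenForCommandAsync()
    {
        using (var recognizer = new SpeechRecognizer())
        {
            // Constrain recognition to a fixed command list. "lights on" /
            // "lights off" are placeholders; the real phrases will come
            // from the device data the server sends us.
            recognizer.Constraints.Add(new SpeechRecognitionListConstraint(
                new[] { "lights on", "lights off" }, "deviceCommands"));
            await recognizer.CompileConstraintsAsync();

            SpeechRecognitionResult result = await recognizer.RecognizeAsync();
            return result.Status == SpeechRecognitionResultStatus.Success
                ? result.Text
                : null;
        }
    }
}
```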
It's hard to believe that we're already half-way through the internship. It's been exciting and I'm learning a lot. Being at the University of A has been a great experience so far. There's so much more to do. I'm excited for the next half.